  • Consciousness: An afterthought. Stevan Harnad - 1982 - Cognition and Brain Theory 5:29-47.
    There are many possible approaches to the mind/brain problem. One of the most prominent, and perhaps the most practical, is to ignore it.
    143 citations
  • Symbol grounding is an empirical problem: Neural nets are just a candidate component. Stevan Harnad - 1993
    "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on (...)
    4 citations
  • The symbol grounding problem. Stevan Harnad - 1990 - Physica D 42:335-346.
    There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem : How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded (...)
    349 citations
  • Other bodies, other minds: A machine incarnation of an old philosophical problem. [REVIEW] Stevan Harnad - 1991 - Minds and Machines 1 (1):43-54.
    Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is (...)
    89 citations
  • The Turing test is not a trick: Turing indistinguishability is a scientific criterion. Stevan Harnad - 1992 - SIGART Bulletin 3 (4):9-10.
    It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish (...)
    18 citations
  • Artificial life: Synthetic versus virtual. Stevan Harnad - 1993 - In Chris Langton (ed.), Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI: 539ff. Addison-Wesley.
    Artificial life can take two forms: synthetic and virtual. In principle, the materials and properties of synthetic living systems could differ radically from those of natural living systems yet still resemble them enough to be really alive if they are grounded in the relevant causal interactions with the real world. Virtual (purely computational) "living" systems, in contrast, are just ungrounded symbol systems that are systematically interpretable as if they were alive; in reality they are no more alive than a virtual (...)
    5 citations
  • Turing indistinguishability and the blind watchmaker. Stevan Harnad - 2002 - In James H. Fetzer (ed.), Consciousness Evolving. John Benjamins. pp. 3-18.
    Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious (...)
    11 citations
  • Minds, machines and Searle. Stevan Harnad - 1989 - Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
    Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic model of the mind. Nonsymbolic modeling turns (...)
    30 citations
  • Virtual symposium on virtual mind. Patrick Hayes, Stevan Harnad, Donald Perlis & Ned Block - 1992 - Minds and Machines 2 (3):217-238.
    When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual (...)
    13 citations
  • What Are the Scope and Limits of Radical Behaviorist Theory? Stevan Harnad - 1984 - Behavioral and Brain Sciences 7 (4):720.
    1 citation
  • Grounding Symbolic Capacity in Robotic Capacity. Stevan Harnad - unknown
    According to "computationalism" (Newell, 1980; Pylyshyn 1984; Dietrich 1990), mental states are computational states, so if one wishes to build a mind, one is actually looking for the right program to run on a digital computer. A computer program is a semantically interpretable formal symbol system consisting of rules for manipulating symbols on the basis of their shapes, which are arbitrary in relation to what they can be systematically interpreted as meaning. According to computationalism, every physical implementation of the right (...)
    3 citations
  • Levels of functional equivalence in reverse bioengineering: The Darwinian Turing test for artificial life. Stevan Harnad - 1994 - Artificial Life 1 (3):93-301.
    Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is (...)
    8 citations
  • Connecting object to symbol in modeling cognition. Stevan Harnad - 1992 - In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer Verlag. pp. 75-90.
    Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990, Fodor 1975, Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988, or, as some have hopefully dubbed it, "subsymbolic" Smolensky 1988). This paper will examine what is and is not (...)
    39 citations
  • Grounding symbols in the analog world with neural nets. Stevan Harnad - 1993 - Think 2 (1):12-78.
    Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the (...)
    13 citations