  • Darwin's mistake: Explaining the discontinuity between human and nonhuman minds. Derek C. Penn, Keith J. Holyoak & Daniel J. Povinelli - 2008 - Behavioral and Brain Sciences 31 (2):109-130.
    Over the last quarter century, the dominant tendency in comparative cognitive psychology has been to emphasize the similarities between human and nonhuman minds and to downplay the differences as "one of degree and not of kind" (Darwin 1871). In the present target article, we argue that Darwin was mistaken: the profound biological continuity between human and nonhuman animals masks an equally profound discontinuity between human and nonhuman minds. To wit, there is a significant discontinuity in the degree to which human and nonhuman animals are able to approximate (...)
  • Before the Systematicity Debate: Recovering the Rationales for Systematizing Thought. Matthieu Queloz - manuscript
    Over the course of the twentieth century, the notion of the systematicity of thought has acquired a much narrower meaning than it used to carry for much of its history. The so-called “systematicity debate” that has dominated the philosophy of language, cognitive science, and AI research over the last thirty years understands the systematicity of thought in terms of the compositionality of thought. But there is an older, broader, and more demanding notion of systematicity that is now increasingly relevant again. (...)
  • Connectionism. James Garson & Cameron Buckner - 2019 - Stanford Encyclopedia of Philosophy.
  • The language of thought hypothesis. Murat Aydede - 2010 - Stanford Encyclopedia of Philosophy.
    A comprehensive introduction to the Language of Thought Hypothesis (LOTH) accessible to general audiences. LOTH is an empirical thesis about thought and thinking. For their explication, it postulates a physically realized system of representations that have a combinatorial syntax (and semantics) such that operations on representations are causally sensitive only to the syntactic properties of representations. According to LOTH, thought is, roughly, the tokening of a representation that has a syntactic (constituent) structure with an appropriate semantics. Thinking thus consists in (...)
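    (A minimal illustrative code sketch of a combinatorial syntax with purely syntax-sensitive operations, in the spirit of this entry, follows the list below.)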
  • Towards structural systematicity in distributed, statically bound visual representations. Shimon Edelman & Nathan Intrator - 2003 - Cognitive Science 27 (1):73-109.
    The problem of representing the spatial structure of images, which arises in visual object processing, is commonly described using terminology borrowed from propositional theories of cognition, notably, the concept of compositionality. The classical propositional stance mandates representations composed of symbols, which stand for atomic or composite entities and enter into arbitrarily nested relationships. We argue that the main desiderata of a representational system—productivity and systematicity—can (indeed, for a number of reasons, should) be achieved without recourse to the classical, proposition‐like compositionality. (...)
  • The language of thought and natural language understanding. Jonathan Knowles - 1998 - Analysis 58 (4):264-272.
    Stephen Laurence and Eric Margolis have recently argued that certain kinds of regress arguments against the language of thought (LOT) hypothesis as an account of how we understand natural languages have been answered incorrectly or inadequately by supporters of LOT ('Regress arguments against the language of thought', Analysis 57 (1): 60-66, 1997). They argue further that this does not undermine the LOT hypothesis, since the main sources of support for LOT are (or might be) independent of it providing an (...)
  • What the <0.70, 1.17, 0.99, 1.07> is a Symbol? Istvan S. N. Berkeley - 2008 - Minds and Machines 18 (1):93-105.
    The notion of a ‘symbol’ plays an important role in the disciplines of Philosophy, Psychology, Computer Science, and Cognitive Science. However, there is comparatively little agreement on how this notion is to be understood, either between disciplines, or even within particular disciplines. This paper does not attempt to defend some putatively ‘correct’ version of the concept of a ‘symbol.’ Rather, some terminological conventions are suggested, some constraints are proposed and a taxonomy of the kinds of issue that give rise to (...)
  • Using extra output learning to insert a symbolic theory into a connectionist network. M. R. W. Dawson, D. A. Medler, D. B. McCaughan, L. Willson & M. Carbonaro - 2000 - Minds and Machines 10 (2):171-201.
    This paper examines whether a classical model could be translated into a PDP network using a standard connectionist training technique called extra output learning. In Study 1, standard machine learning techniques were used to create a decision tree that could be used to classify 8124 different mushrooms as being edible or poisonous on the basis of 21 different features (Schlimmer, 1987). In Study 2, extra output learning was used to insert this decision tree into a PDP network being trained on (...)
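    (A minimal illustrative code sketch of the extra output learning idea described here follows the list below.)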
  • Exhibiting versus explaining systematicity: A reply to Hadley and Hayward. [REVIEW] Kenneth Aizawa - 1997 - Minds and Machines 7 (1):39-55.
  • Explaining systematicity: A reply to Kenneth Aizawa. [REVIEW] Robert F. Hadley - 1997 - Minds and Machines 7 (4):571-579.
    In his discussion of results which I (with Michael Hayward) recently reported in this journal, Kenneth Aizawa takes issue with two of our conclusions, which are: (a) that our connectionist model provides a basis for explaining systematicity within the realm of sentence comprehension, and subject to a limited range of syntax, and (b) that the model does not employ structure-sensitive processing, and that this is clearly true in the early stages of the network's training. Ultimately, Aizawa rejects both (a) and (b) (...)
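The Aydede entry above characterizes the Language of Thought Hypothesis by two commitments: mental representations with a combinatorial (constituent) syntax, and operations that are causally sensitive only to that syntax. The following is a minimal sketch of those two ingredients, not code from any of the cited works; every name in it (Sym, App, AND, conjunction_elimination) is invented for illustration.

```python
# A minimal, illustrative sketch (not from the cited works) of the two LOTH
# ingredients the Aydede entry mentions: representations with combinatorial
# (constituent) structure, and operations sensitive only to that structure.
# All names here (Sym, App, AND, conjunction_elimination) are invented.

from dataclasses import dataclass
from typing import Tuple, Union


@dataclass(frozen=True)
class Sym:
    """An atomic symbol, e.g. JOHN or LOVES."""
    name: str


@dataclass(frozen=True)
class App:
    """A complex representation: a head symbol applied to constituent arguments."""
    head: Sym
    args: Tuple["Expr", ...]


Expr = Union[Sym, App]

AND = Sym("AND")


def conjunction_elimination(expr: Expr) -> Tuple[Expr, ...]:
    """A purely syntax-sensitive operation: if the representation has the shape
    AND(p, q, ...), return its constituents; otherwise return it unchanged.
    Nothing about what the symbols mean is ever consulted."""
    if isinstance(expr, App) and expr.head == AND:
        return expr.args
    return (expr,)


if __name__ == "__main__":
    # "John loves Mary AND Mary is tall", built compositionally from reusable parts.
    loves = App(Sym("LOVES"), (Sym("JOHN"), Sym("MARY")))
    tall = App(Sym("TALL"), (Sym("MARY"),))
    thought = App(AND, (loves, tall))
    print(conjunction_elimination(thought))
```

The point of the sketch is that conjunction_elimination inspects only the shape AND(p, q, ...); swapping in different atomic symbols changes nothing about how the operation applies, which is the sense in which processing is sensitive only to syntactic properties.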
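The Dawson et al. entry describes extra output learning: a symbolic theory supplies additional training targets for a network being trained on the original task. The sketch below conveys only the general idea, under stated assumptions: synthetic binary features stand in for the Schlimmer (1987) mushroom data, scikit-learn's DecisionTreeClassifier and MLPClassifier stand in for the original tree-induction and PDP software, and every hyperparameter is illustrative rather than taken from the paper.

```python
# A minimal sketch of the "extra output learning" idea: a symbolic theory
# (here a decision tree) supplies additional output targets for a feedforward
# network, alongside the ordinary class label. Data and hyperparameters are
# illustrative stand-ins, not those of the cited study.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic binary-feature classification problem standing in for the mushrooms.
X = rng.integers(0, 2, size=(1000, 21)).astype(float)
y = ((X[:, 0] + X[:, 3] + X[:, 7]) >= 2).astype(int)   # arbitrary "edibility" rule

# Step 1: learn a small symbolic theory of the task.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Step 2: turn the tree into extra output targets -- one unit per leaf,
# active when the example is sorted into that leaf.
leaf_ids = tree.apply(X)                                # leaf index per example
leaves = np.unique(leaf_ids)
extra = (leaf_ids[:, None] == leaves[None, :]).astype(int)

# Step 3: train a network whose output layer carries the class label plus
# the extra, theory-derived units (multilabel targets).
targets = np.column_stack([y, extra])
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X, targets)

# The first output unit still does the original job; the rest mirror the tree.
pred = net.predict(X)
print("class accuracy:", (pred[:, 0] == y).mean())
print("extra-output agreement:", (pred[:, 1:] == extra).mean())
```

Because the extra output units mirror the tree's leaves, one can then examine whether the trained network has internalized the inserted symbolic theory, which is the kind of question the cited study pursues with its own tree and network.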