Citations

  • Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  • Whatever next? Predictive brains, situated agents, and the future of cognitive science. Andy Clark - 2013 - Behavioral and Brain Sciences 36 (3):181-204.
    Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to (...)
  • (1 other version) Categories. G. Ryle - 1938 - Proceedings of the Aristotelian Society 38:189-206.
  • (1 other version) The symbol grounding problem. Stevan Harnad - 1990 - Physica D 42:335-346.
    There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem : How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded (...)
  • (1 other version) Minds, brains, and programs. John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-457.
    What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI. According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. (...)
  • Fast thinking. Daniel C. Dennett - 1981 - In Daniel Clement Dennett (ed.), The Intentional Stance. MIT Press.
  • Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data. Emily M. Bender & Alexander Koller - 2020 - Proceedings of the Annual Meeting of the Association for Computational Linguistics 58:5185-5198.
  • (What) Can Deep Learning Contribute to Theoretical Linguistics? Gabe Dupre - 2021 - Minds and Machines 31 (4):617-635.
    Deep learning techniques have revolutionised artificial systems’ performance on myriad tasks, from playing Go to medical diagnosis. Recent developments have extended such successes to natural language processing, an area once deemed beyond such systems’ reach. Despite their different goals, these successes have suggested that such systems may be pertinent to theoretical linguistics. The competence/performance distinction presents a fundamental barrier to such inferences. While DL systems are trained on linguistic performance, linguistic theories are aimed at competence. Such a barrier has traditionally (...)
  • (1 other version) Minds, Brains, and Programs. John Searle - 2003 - In John Heil (ed.), Philosophy of Mind: A Guide and Anthology. New York: Oxford University Press.
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • (1 other version) Matter and Memory. Henri Bergson - 1911 - The Monist 21:318.
  • (1 other version) Matter and Memory. Henri Bergson, Nancy Margaret Paul & W. Scott Palmer - 1911 - International Journal of Ethics 22 (1):101-107.