
Citations of:

Where Do Features Come From?

Cognitive Science 38 (6):1078-1101 (2014)

  • Troubles with mathematical contents. Marco Facchin - forthcoming - Philosophical Psychology.
    To account for the explanatory role representations play in cognitive science, Egan’s deflationary account introduces a distinction between cognitive and mathematical contents. According to that account, only the latter are genuine explanatory posits of cognitive-scientific theories, as they represent the arguments and values cognitive devices need to represent to compute. Here, I argue that the deflationary account suffers from two important problems, whose roots trace back to the introduction of mathematical contents. First, I will argue that mathematical contents do not (...)
  • AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
  • Are Generative Models Structural Representations? Marco Facchin - 2021 - Minds and Machines 31 (2):277-303.
    Philosophers interested in the theoretical consequences of predictive processing often assume that predictive processing is an inferentialist and representationalist theory of cognition. More specifically, they assume that predictive processing revolves around approximated Bayesian inferences drawn by inverting a generative model. Generative models, in turn, are said to be structural representations: representational vehicles that represent their targets by being structurally similar to them. Here, I challenge this assumption, claiming that, at present, it lacks an adequate justification. I examine the only argument (...)
  • Bayesian cognitive science, predictive brains, and the nativism debate. Matteo Colombo - 2018 - Synthese 195 (11):4817-4838.
    The rise of Bayesianism in cognitive science promises to shape the debate between nativists and empiricists into more productive forms—or so have claimed several philosophers and cognitive scientists. The present paper explicates this claim, distinguishing different ways of understanding it. After clarifying what is at stake in the controversy between nativists and empiricists, and what is involved in current Bayesian cognitive science, the paper argues that Bayesianism offers not a vindication of either nativism or empiricism, but one way to talk (...)
  • Computational Functionalism for the Deep Learning Era. Ezequiel López-Rubio - 2018 - Minds and Machines 28 (4):667-688.
    Deep learning is a kind of machine learning which happens in a certain type of artificial neural networks called deep networks. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance in many intelligent tasks. This poses the question whether this performance is caused by such similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two important reasons for the success of deep learning, namely the extraction of successively (...)
  • Learning Orthographic Structure With Sequential Generative Neural Networks. Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi - 2016 - Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual (...)
  • Interactive Activation and Mutual Constraint Satisfaction in Perception and Cognition. James L. McClelland, Daniel Mirman, Donald J. Bolger & Pranav Khaitan - 2014 - Cognitive Science 38 (6):1139-1189.
    In a seminal 1977 article, Rumelhart argued that perception required the simultaneous use of multiple sources of information, allowing perceivers to optimally interpret sensory information at many levels of representation in real time as information arrives. Building on Rumelhart's arguments, we present the Interactive Activation hypothesis—the idea that the mechanism used in perception and comprehension to achieve these feats exploits an interactive activation process implemented through the bidirectional propagation of activation among simple processing units. We then examine the interactive activation (...)