  • Structural representations do not meet the job description challenge. Marco Facchin - 2021 - Synthese 199 (3-4):5479-5508.
    Structural representations are increasingly popular in philosophy of cognitive science. A key virtue they seemingly boast is that of meeting Ramsey's job description challenge. For this reason, structural representations appear tailored to play a clear representational role within cognitive architectures. Here, however, I claim that structural representations do not meet the job description challenge. This is because even our most demanding account of their functional profile is satisfied by at least some receptors, which paradigmatically fail the job description challenge. Hence, (...)
  • Deep learning and cognitive science. Pietro Perconti & Alessio Plebe - 2020 - Cognition 203:104365.
    In recent years, the family of algorithms collected under the term "deep learning" has revolutionized artificial intelligence, enabling machines to reach human-like performance in many complex cognitive tasks. Although deep learning models are grounded in the connectionist paradigm, their recent advances were basically developed with engineering goals in mind. Despite their applied focus, deep learning models eventually seem fruitful for cognitive purposes. This can be thought of as a kind of biological exaptation, where a physiological structure becomes applicable for a (...)
  • Understanding Structural Representations. Marc Artiga - forthcoming - British Journal for the Philosophy of Science.
  • The Unbearable Shallow Understanding of Deep Learning. Alessio Plebe & Giorgio Grasso - 2019 - Minds and Machines 29 (4):515-553.
    This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given for the ups and downs that have characterized neural network research and its evolution from "shallow" to "deep" learning architectures. A precise account of "success" is given, in order to sieve out (...)