  1. The imaginary fundamentalists: The unshocking truth about Bayesian cognitive science. Nick Chater, Noah Goodman, Thomas L. Griffiths, Charles Kemp, Mike Oaksford & Joshua B. Tenenbaum - 2011 - Behavioral and Brain Sciences 34 (4): 194-196.
    If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
  2. Where Do Features Come From? Geoffrey Hinton - 2014 - Cognitive Science 38 (6): 1078-1101.
    It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network. This is a very effective learning procedure when there is a huge amount of labeled training data, but for many learning tasks very few labeled examples are available. In an effort to overcome the need for labeled data, several different generative models were developed that learned interesting features by modeling the higher order statistical structure of a set of input vectors. One (...)
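    The abstract above contrasts backpropagation, which needs plentiful labeled data, with generative models that learn features from unlabeled input statistics. As a concrete illustration of the first procedure, here is a minimal sketch of learning features by backpropagating error derivatives through a small feedforward network; the toy architecture, dataset, and learning rate are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    # A minimal sketch of learning features by backpropagating error
    # derivatives through a small feedforward network. Architecture and
    # data are illustrative assumptions.

    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 4))              # 64 toy examples, 4 inputs
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # labeled training data

    W1 = rng.standard_normal((4, 8)) * 0.1        # first layer of features
    W2 = rng.standard_normal((8, 1)) * 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        # forward pass: two layers of non-linear features
        h = np.tanh(X @ W1)
        p = sigmoid(h @ W2)
        # backward pass: propagate error derivatives toward the input
        grad_out = (p - y) / len(X)                # d(cross-entropy)/d(logit)
        grad_W2 = h.T @ grad_out
        grad_h = grad_out @ W2.T * (1.0 - h**2)    # tanh derivative
        grad_W1 = X.T @ grad_h
        W1 -= 0.5 * grad_W1
        W2 -= 0.5 * grad_W2

    print(float(((p > 0.5) == y).mean()))          # training accuracy on the toy task
    ```

    On this toy task labeled examples are free to generate; the abstract's point is that real tasks rarely offer so much labeled data, which is what motivates the generative alternatives it goes on to describe.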
  3. Language Evolution by Iterated Learning With Bayesian Agents. Thomas L. Griffiths & Michael L. Kalish - 2007 - Cognitive Science 31 (3): 441-480.
    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior (...)
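    The analysis above concerns chains of Bayesian learners, each learning from data produced by the previous one. A minimal sketch of such a chain follows, under assumed toy parameters (two candidate languages, a shared prior, learners who sample a language from their posterior); for samplers, the paper's central result is that the long-run distribution over languages converges to the prior.

    ```python
    import random

    # A toy iterated-learning chain with Bayesian agents. The two
    # "languages", their production probabilities, and the prior are
    # illustrative assumptions, not values from the paper.

    LANGUAGES = {0: 0.8, 1: 0.2}   # P(utterance == 'a') under each language
    PRIOR = {0: 0.6, 1: 0.4}       # the learners' shared inductive bias

    def likelihood(data, lang):
        """P(data | language) for a sequence of 'a'/'b' utterances."""
        p = LANGUAGES[lang]
        out = 1.0
        for u in data:
            out *= p if u == 'a' else (1.0 - p)
        return out

    def posterior(data):
        """Combine the prior with the evidence via Bayes' rule."""
        scores = {h: PRIOR[h] * likelihood(data, h) for h in LANGUAGES}
        z = sum(scores.values())
        return {h: s / z for h, s in scores.items()}

    def iterate(generations=10000, n_utterances=5, seed=0):
        rng = random.Random(seed)
        lang = 0
        counts = {0: 0, 1: 0}
        for _ in range(generations):
            # the previous generation produces data from its language
            p = LANGUAGES[lang]
            data = ['a' if rng.random() < p else 'b'
                    for _ in range(n_utterances)]
            # the next learner samples a language from its posterior
            post = posterior(data)
            lang = 0 if rng.random() < post[0] else 1
            counts[lang] += 1
        return {h: c / generations for h, c in counts.items()}

    # With posterior sampling, the empirical frequencies approach the prior.
    print(iterate())
    ```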
  4. Bayesian methods for supervised neural networks. David Barber - 2002 - In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press.