  • Investigating the properties of neural network representations in reinforcement learning. Han Wang, Erfan Miahi, Martha White, Marlos C. Machado, Zaheer Abbas, Raksha Kumaraswamy, Vincent Liu & Adam White - 2024 - Artificial Intelligence 330 (C):104100.
  • A comparison of distributed machine learning methods for the support of “many labs” collaborations in computational modeling of decision making. Lili Zhang, Himanshu Vashisht, Andrey Totev, Nam Trinh & Tomas Ward - 2022 - Frontiers in Psychology 13.
    Deep learning models are powerful tools for representing the complex learning processes and decision-making strategies used by humans. Such neural network models make fewer assumptions about the underlying mechanisms thus providing experimental flexibility in terms of applicability. However, this comes at the cost of involving a larger number of parameters requiring significantly more data for effective learning. This presents practical challenges given that most cognitive experiments involve relatively small numbers of subjects. Laboratory collaborations are a natural way to increase overall (...)
  • Cross‐Situational Word Learning With Multimodal Neural Networks. Wai Keen Vong & Brenden M. Lake - 2022 - Cognitive Science 46 (4).
  • The practical ethics of bias reduction in machine translation: why domain adaptation is better than data debiasing. Marcus Tomalin, Bill Byrne, Shauna Concannon, Danielle Saunders & Stefanie Ullmann - 2021 - Ethics and Information Technology 23 (3):419-433.
    This article probes the practical ethical implications of AI system design by reconsidering the important topic of bias in the datasets used to train autonomous intelligent systems. The discussion draws on recent work concerning behaviour-guiding technologies, and it adopts a cautious form of technological utopianism by assuming it is potentially beneficial for society at large if AI systems are designed to be comparatively free from the biases that characterise human behaviour. However, the argument presented here critiques the common well-intentioned requirement (...)
  • The role of memory consolidation in generalisation of new linguistic information. Jakke Tamminen, Matthew H. Davis, Marjolein Merkx & Kathleen Rastle - 2012 - Cognition 125 (1):107-112.
  • The Philosophical Significance of Deep Learning [深層学習の哲学的意義]. Takayuki Suzuki - 2021 - Kagaku Tetsugaku 53 (2):151-167.
  • Moving beyond content‐specific computation in artificial neural networks. Nicholas Shea - 2021 - Mind and Language 38 (1):156-177.
    A basic deep neural network (DNN) is trained to exhibit a large set of input–output dispositions. While being a good model of the way humans perform some tasks automatically, without deliberative reasoning, more is needed to approach human‐like artificial intelligence. Analysing recent additions brings to light a distinction between two fundamentally different styles of computation: content‐specific and non‐content‐specific computation (as first defined here). For example, deep episodic RL networks draw on both. So does human conceptual reasoning. Combining the two takes (...)
  • Time-Based Binding as a Solution to and a Limitation for Flexible Cognition. Mehdi Senoussi, Pieter Verbeke & Tom Verguts - 2022 - Frontiers in Psychology 12.
    Why can’t we keep as many items as we want in working memory? It has long been debated whether this resource limitation is a bug or instead a feature. We propose that the resource limitation is a consequence of a useful feature. Specifically, we propose that flexible cognition requires time-based binding, and time-based binding necessarily limits the number of memoranda that can be stored simultaneously. Time-based binding is most naturally instantiated via neural oscillations, for which there exists ample experimental evidence. (...)
  • Artificial intelligence and modern planned economies: a discussion on methods and institutions. Spyridon Samothrakis - forthcoming - AI and Society:1-12.
    Interest in computerised central economic planning (CCEP) has seen a resurgence, as there is strong demand for an alternative vision to modern free (or not so free) market liberal capitalism. Given the close links of CCEP with what we would now broadly call artificial intelligence (AI)—e.g. optimisation, game theory, function approximation, machine learning, automated reasoning—it is reasonable to draw direct analogues and perform an analysis that would help identify what commodities and institutions we should see for a CCEP programme to (...)
  • How to Learn Multiple Tasks. Raffaele Calabretta, Andrea Di Ferdinando, Domenico Parisi & Frank C. Keil - 2008 - Biological Theory 3 (1):30-41.
    The article examines the question of how learning multiple tasks interacts with neural architectures and the flow of information through those architectures. It approaches the question by using the idealization of an artificial neural network where it is possible to ask more precise questions about the effects of modular versus nonmodular architectures as well as the effects of sequential versus simultaneous learning of tasks. A prior work has demonstrated a clear advantage of modular architectures when the two tasks must be (...)
  • The Epistemology of Forgetting. Kourken Michaelian - 2011 - Erkenntnis 74 (3):399-424.
    The default view in the epistemology of forgetting is that human memory would be epistemically better if we were not so susceptible to forgetting—that forgetting is in general a cognitive vice. In this paper, I argue for the opposed view: normal human forgetting—the pattern of forgetting characteristic of cognitively normal adult human beings—approximates a virtue located at the mean between the opposed cognitive vices of forgetting too much and remembering too much. I argue, first, that, for any finite cognizer, a (...)
  • Computational Evidence That Frequency Trajectory Theory Does Not Oppose But Emerges From Age‐of‐Acquisition Theory. Martial Mermillod, Patrick Bonin, Alain Méot, Ludovic Ferrand & Michel Paindavoine - 2012 - Cognitive Science 36 (8):1499-1531.
    According to the age-of-acquisition hypothesis, words acquired early in life are processed faster and more accurately than words acquired later. Connectionist models have begun to explore the influence of the age/order of acquisition of items (and also their frequency of encounter). This study attempts to reconcile two different methodological and theoretical approaches (proposed by Lambon Ralph & Ehsan, 2006 and Zevin & Seidenberg, 2002) to age-limited learning effects. The current simulations extend the findings reported by Zevin and Seidenberg (2002) that (...)
  • Asymmetric interference in 3‐ to 4‐month‐olds' sequential category learning. Denis Mareschal, Paul C. Quinn & Robert M. French - 2002 - Cognitive Science 26 (3):377-389.
    Three‐ to 4‐month‐old infants show asymmetric exclusivity in the acquisition of cat and dog perceptual categories. The cat perceptual category excludes dog exemplars, but the dog perceptual category does not exclude cat exemplars. We describe a connectionist autoencoder model of perceptual categorization that shows the same asymmetries as infants. The model predicts the presence of asymmetric retroactive interference when infants acquire cat and dog categories sequentially. A subsequent experiment conducted with 3‐ to 4‐month‐olds verifies the predicted pattern of looking time (...)
  • CVPR 2020 continual learning in computer vision competition: Approaches, results, current challenges and future directions. Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodriguez, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vazquez, German I. Parisi, Nikhil Churamani, Marc Pickett, Issam Laradji & Davide Maltoni - 2022 - Artificial Intelligence 303 (C):103635.
  • Generalization through the recurrent interaction of episodic memories: A model of the hippocampal system. Dharshan Kumaran & James L. McClelland - 2012 - Psychological Review 119 (3):573-616.
  • Neural dynamics of autistic behaviors: Cognitive, emotional, and timing substrates. Stephen Grossberg & Don Seidman - 2006 - Psychological Review 113 (3):483-525.
  • Labels as Features (Not Names) for Infant Categorization: A Neurocomputational Approach. Valentina Gliozzi, Julien Mayor, Jon-Fan Hu & Kim Plunkett - 2009 - Cognitive Science 33 (4):709-738.
    A substantial body of experimental evidence has demonstrated that labels have an impact on infant categorization processes. Yet little is known regarding the nature of the mechanisms by which this effect is achieved. We distinguish between two competing accounts: supervised name‐based categorization and unsupervised feature‐based categorization. We describe a neurocomputational model of infant visual categorization, based on self‐organizing maps, that implements the unsupervised feature‐based approach. The model successfully reproduces experiments demonstrating the impact of labeling on infant visual categorization reported in (...)
  • Lexical competition and the acquisition of novel words. M. Gareth Gaskell & Nicolas Dumay - 2003 - Cognition 89 (2):105-132.
  • Sequential Presentation Protects Working Memory From Catastrophic Interference. Ansgar D. Endress & Szilárd Szabó - 2020 - Cognitive Science 44 (5):e12828.
    Neural network models of memory are notorious for catastrophic interference: Old items are forgotten as new items are memorized (French, 1999; McCloskey & Cohen, 1989). While working memory (WM) in human adults shows severe capacity limitations, these capacity limitations do not reflect neural network style catastrophic interference. However, our ability to quickly apprehend the numerosity of small sets of objects (i.e., subitizing) does show catastrophic capacity limitations, and this subitizing capacity and WM might reflect a common capacity. Accordingly, computational investigations (...)
  • Overnight lexical consolidation revealed by speech segmentation. Nicolas Dumay & M. Gareth Gaskell - 2012 - Cognition 123 (1):119-132.
  • Resonant Dynamics of Grounded Cognition: Explanation of Behavioral and Neuroimaging Data Using the ART Neural Network. Dražen Domijan & Mia Šetić - 2016 - Frontiers in Psychology 7.
  • Analyzing Machine‐Learned Representations: A Natural Language Case Study. Ishita Dasgupta, Demi Guo, Samuel J. Gershman & Noah D. Goodman - 2020 - Cognitive Science 44 (12):e12925.
    As modern deep networks become more complex, and get closer to human‐like capabilities in certain domains, the question arises as to how the representations and decision rules they learn compare to the ones in humans. In this work, we study representations of sentences in one such artificial system for natural language processing. We first present a diagnostic test dataset to examine the degree of abstract composable structure represented. Analyzing performance on these diagnostic tests indicates a lack of systematicity in representations (...)
  • The Now-or-Never bottleneck: A fundamental constraint on language. Morten H. Christiansen & Nick Chater - 2016 - Behavioral and Brain Sciences 39:e62.
    Memory is fleeting. New material rapidly obliterates previous material. How, then, can the brain deal successfully with the continual deluge of linguistic input? We argue that, to deal with this “Now-or-Never” bottleneck, the brain must compress and recode linguistic input as rapidly as possible. This observation has strong implications for the nature of language processing: (1) the language system must “eagerly” recode and compress linguistic input; (2) as the bottleneck recurs at each new representational level, the language system must build (...)
  • Squeezing through the Now-or-Never bottleneck: Reconnecting language processing, acquisition, change, and structure. Nick Chater & Morten H. Christiansen - 2016 - Behavioral and Brain Sciences 39:e91.
    If human language must be squeezed through a narrow cognitive bottleneck, what are the implications for language processing, acquisition, change, and structure? In our target article, we suggested that the implications are far-reaching and form the basis of an integrated account of many apparently unconnected aspects of language and language processing, as well as suggesting revision of many existing theoretical accounts. With some exceptions, commentators were generally supportive both of the existence of the bottleneck and its potential implications. Many commentators (...)
  • Sparse distributed memory: understanding the speed and robustness of expert memory. Marcelo S. Brogliato, Daniel M. Chada & Alexandre Linhares - 2014 - Frontiers in Human Neuroscience 8.
  • Learning and development in neural networks – the importance of prior experience. Gerry T. M. Altmann - 2002 - Cognition 85 (2):B43-B50.