  • Reward-respecting subtasks for model-based reinforcement learning. Richard S. Sutton, Marlos C. Machado, G. Zacharias Holland, David Szepesvari, Finbarr Timbers, Brian Tanner & Adam White - 2023 - Artificial Intelligence 324 (C):104001.
  • Mario Becomes Cognitive. Fabian Schrodt, Jan Kneissler, Stephan Ehrenfeld & Martin V. Butz - 2017 - Topics in Cognitive Science 9 (2):343-373.
    In line with Allen Newell's challenge to develop complete cognitive architectures, and motivated by a recent proposal for a unifying subsymbolic computational theory of cognition, we introduce the cognitive control architecture SEMLINCS. SEMLINCS models the development of an embodied cognitive agent that learns discrete production rule-like structures from its own, autonomously gathered, continuous sensorimotor experiences. Moreover, the agent uses the developing knowledge to plan and control environmental interactions in a versatile, goal-directed, and self-motivated manner. Thus, in contrast to several well-known (...)
  • Learning robot control. Stefan Schaal - 2002 - In Michael A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, Second Edition. MIT Press. pp. 2--983.
  • Resource Rationality. Thomas F. Icard - manuscript.
    Theories of rational decision making often abstract away from computational and other resource limitations faced by real agents. An alternative approach known as resource rationality puts such matters front and center, grounding choice and decision in the rational use of finite resources. Anticipated by earlier work in economics and in computer science, this approach has recently seen rapid development and application in the cognitive sciences. Here, the theory of rationality plays a dual role, both as a framework for normative assessment (...)
  • Humans can navigate complex graph structures acquired during latent learning. Milena Rmus, Harrison Ritz, Lindsay E. Hunter, Aaron M. Bornstein & Amitai Shenhav - 2022 - Cognition 225 (C):105103.
  • Developing PFC Representations Using Reinforcement Learning. Jeremy R. Reynolds & Randall C. O’Reilly - 2009 - Cognition 113 (3):281-292.
  • The Hierarchical Evolution in Human Vision Modeling. Dana H. Ballard & Ruohan Zhang - 2021 - Topics in Cognitive Science 13 (2):309-328.
    Ballard and Zhang offer a fascinating review of how computational models of human vision have evolved since David Marr proposed his Tri‐Level Hypothesis, with a focus on the refinement of algorithm descriptions over time.
  • Event‐Predictive Cognition: A Root for Conceptual Human Thought. Martin V. Butz, Asya Achimova, David Bilkey & Alistair Knott - 2021 - Topics in Cognitive Science 13 (1):10-24.
    Butz, Achimova, Bilkey, and Knott provide a topic overview and discuss whether the special issue contributions may imply that event‐predictive abilities constitute a root for conceptual human thought, because they enable complex, mutually beneficial, but also intricately competitive, social interactions and language communication.
  • Cognitive Modeling of Automation Adaptation in a Time Critical Task. Junya Morita, Kazuhisa Miwa, Akihiro Maehigashi, Hitoshi Terai, Kazuaki Kojima & Frank E. Ritter - 2020 - Frontiers in Psychology 11.
  • Hierarchically organized behavior and its neural foundations: A reinforcement learning perspective. Matthew M. Botvinick, Yael Niv & Andrew C. Barto - 2009 - Cognition 113 (3):262-280.
  • Overlapping layered learning. Patrick MacAlpine & Peter Stone - 2018 - Artificial Intelligence 254 (C):21-43.
  • Hierarchical clustering optimizes the tradeoff between compositionality and expressivity of task structures for flexible reinforcement learning. Rex G. Liu & Michael J. Frank - 2022 - Artificial Intelligence 312 (C):103770.
  • The Mental Representation of Human Action. Sydney Levine, Alan M. Leslie & John Mikhail - 2018 - Cognitive Science 42 (4):1229-1264.
    Various theories of moral cognition posit that moral intuitions can be understood as the output of a computational process performed over structured mental representations of human action. We propose that action plan diagrams—“act trees”—can be a useful tool for theorists to succinctly and clearly present their hypotheses about the information contained in these representations. We then develop a methodology for using a series of linguistic probes to test the theories embodied in the act trees. In Study 1, we validate the (...)
  • Learning agents that acquire representations of social groups. Joel Z. Leibo, Alexander Sasha Vezhnevets, Maria K. Eckstein, John P. Agapiou & Edgar A. Duéñez-Guzmán - 2022 - Behavioral and Brain Sciences 45.
    Humans are learning agents that acquire social group representations from experience. Here, we discuss how to construct artificial agents capable of this feat. One approach, based on deep reinforcement learning, allows the necessary representations to self-organize. This minimizes the need for hand-engineering, improving robustness and scalability. It also enables “virtual neuroscience” research on the learned representations.
  • Expanding horizons in reinforcement learning for curious exploration and creative planning. Dale Zhou & Aaron M. Bornstein - 2024 - Behavioral and Brain Sciences 47:e118.
    Curiosity and creativity are expressions of the trade-off between leveraging that with which we are familiar or seeking out novelty. Through the computational lens of reinforcement learning, we describe how formulating the value of information seeking and generation via their complementary effects on planning horizons formally captures a range of solutions to striking this balance.
  • Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Varun Raj Kompella, Marijn Stollenga, Matthew Luciw & Juergen Schmidhuber - 2017 - Artificial Intelligence 247 (C):313-335.
  • Abstraction from demonstration for efficient reinforcement learning in high-dimensional domains. Luis C. Cobo, Kaushik Subramanian, Charles L. Isbell, Aaron D. Lanterman & Andrea L. Thomaz - 2014 - Artificial Intelligence 216 (C):103-128.
  • Events and Machine Learning. Augustus Hebblewhite, Jakob Hohwy & Tom Drummond - 2021 - Topics in Cognitive Science 13 (1):243-247.
  • Reinforcement learning and higher level cognition: Introduction to special issue. Nathaniel D. Daw & Michael J. Frank - 2009 - Cognition 113 (3):259-261.
  • Rationalizable Irrationalities of Choice. Peter Dayan - 2014 - Topics in Cognitive Science 6 (2):204-228.
    Although seemingly irrational choice abounds, the rules governing these mis‐steps that might provide hints about the factors limiting normative behavior are unclear. We consider three experimental tasks, which probe different aspects of non‐normative choice under uncertainty. We argue for systematic statistical, algorithmic, and implementational sources of irrationality, including incomplete evaluation of long‐run future utilities, Pavlovian actions, and habits, together with computational and statistical noise and uncertainty. We suggest structural and functional adaptations that minimize their maladaptive effects.
  • Toward a Unified Sub-symbolic Computational Theory of Cognition. Martin V. Butz - 2016 - Frontiers in Psychology 7:171252.
    This paper proposes how various disciplinary theories of cognition may be combined into a unifying, sub-symbolic, computational theory of cognition. The following theories are considered for integration: psychological theories, including the theory of event coding, event segmentation theory, the theory of anticipatory behavioral control, and concept development; artificial intelligence and machine learning theories, including reinforcement learning and generative artificial neural networks; and theories from theoretical and computational neuroscience, including predictive coding and free energy-based inference. In the light of such a (...)
  • Krister Segerberg on Logic of Actions. Robert Trypuz (ed.) - 2013 - Dordrecht, Netherlands: Springer Verlag.