  • Discovering hidden structure in factored MDPs. Andrey Kolobov, Mausam & Daniel S. Weld - 2012 - Artificial Intelligence 189 (C):19-47.
  • Real-time dynamic programming for Markov decision processes with imprecise probabilities. Karina V. Delgado, Leliane N. de Barros, Daniel B. Dias & Scott Sanner - 2016 - Artificial Intelligence 230 (C):192-223.
  • Planning under time constraints in stochastic domains. Thomas Dean, Leslie Pack Kaelbling, Jak Kirman & Ann Nicholson - 1995 - Artificial Intelligence 76 (1-2):35-74.
  • Abstraction and approximate decision-theoretic planning. Richard Dearden & Craig Boutilier - 1997 - Artificial Intelligence 89 (1-2):219-283.
  • Weak, strong, and strong cyclic planning via symbolic model checking. A. Cimatti, M. Pistore, M. Roveri & P. Traverso - 2003 - Artificial Intelligence 147 (1-2):35-84.
  • Exploiting redundancy for flexible behavior: Unsupervised learning in a modular sensorimotor control architecture. Martin V. Butz, Oliver Herbort & Joachim Hoffmann - 2007 - Psychological Review 114 (4):1015-1046.
  • The factored policy-gradient planner. Olivier Buffet & Douglas Aberdeen - 2009 - Artificial Intelligence 173 (5-6):722-747.
  • Sequential Monte Carlo in reachability heuristics for probabilistic planning. Daniel Bryce, Subbarao Kambhampati & David E. Smith - 2008 - Artificial Intelligence 172 (6-7):685-715.
  • A dynamical systems perspective on agent-environment interaction. Randall D. Beer - 1995 - Artificial Intelligence 72 (1-2):173-215.
  • Depth-based short-sighted stochastic shortest path problems. Felipe W. Trevizan & Manuela M. Veloso - 2014 - Artificial Intelligence 216 (C):179-205.
  • Learning metric-topological maps for indoor mobile robot navigation. Sebastian Thrun - 1998 - Artificial Intelligence 99 (1):21-71.
  • Model-based average reward reinforcement learning. Prasad Tadepalli & DoKyeong Ok - 1998 - Artificial Intelligence 100 (1-2):177-224.
  • State space search nogood learning: Online refinement of critical-path dead-end detectors in planning. Marcel Steinmetz & Jörg Hoffmann - 2017 - Artificial Intelligence 245 (C):1-37.
  • Controlling the learning process of real-time heuristic search. Masashi Shimbo & Toru Ishida - 2003 - Artificial Intelligence 146 (1):1-41.
  • Learning how to combine sensory-motor functions into a robust behavior. Benoit Morisset & Malik Ghallab - 2008 - Artificial Intelligence 172 (4-5):392-412.
  • Sequential plan recognition: An iterative approach to disambiguating between hypotheses. Reuth Mirsky, Roni Stern, Kobi Gal & Meir Kalech - 2018 - Artificial Intelligence 260 (C):51-73.
  • From implicit skills to explicit knowledge: A bottom-up model of skill learning. Ron Sun, Edward Merrill & Todd Peterson - 2001 - Cognitive Science 25 (2):203-244.
    This paper presents a skill learning model CLARION. Different from existing models of mostly high-level skill learning that use a top-down approach (that is, turning declarative knowledge into procedural knowledge through practice), we adopt a bottom-up approach toward low-level skill learning, where procedural knowledge develops first and declarative knowledge develops later. Our model is formed by integrating connectionist, reinforcement, and symbolic learning methods to perform on-line reactive learning. It adopts a two-level dual-representation framework (Sun, 1995), with a combination of localist (...)
  • Consistency and Variation in Reasoning About Physical Assembly. William P. McCarthy, David Kirsh & Judith E. Fan - 2023 - Cognitive Science 47 (12):e13397.
    The ability to reason about how things were made is a pervasive aspect of how humans make sense of physical objects. Such reasoning is useful for a range of everyday tasks, from assembling a piece of furniture to making a sandwich and knitting a sweater. What enables people to reason in this way even about novel objects, and how do people draw upon prior experience with an object to continually refine their understanding of how to create it? To explore these (...)
  • Contingent planning under uncertainty via stochastic satisfiability. Stephen M. Majercik & Michael L. Littman - 2003 - Artificial Intelligence 147 (1-2):119-162.
  • Probabilistic planning with clear preferences on missing information. Maxim Likhachev & Anthony Stentz - 2009 - Artificial Intelligence 173 (5-6):696-721.
  • Multiple perspective dynamic decision making. Tze Yun Leong - 1998 - Artificial Intelligence 105 (1-2):209-261.
  • Minimax real-time heuristic search. Sven Koenig - 2001 - Artificial Intelligence 129 (1-2):165-197.
  • Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance. W. Bradley Knox & Peter Stone - 2015 - Artificial Intelligence 225 (C):24-50.
  • Deliberation for autonomous robots: A survey. Félix Ingrand & Malik Ghallab - 2017 - Artificial Intelligence 247 (C):10-44.
  • An anytime algorithm for constrained stochastic shortest path problems with deterministic policies. Sungkweon Hong & Brian C. Williams - 2023 - Artificial Intelligence 316 (C):103846.
  • LAO*: A heuristic search algorithm that finds solutions with loops. Eric A. Hansen & Shlomo Zilberstein - 2001 - Artificial Intelligence 129 (1-2):35-62.
  • Robot shaping: developing autonomous agents through learning. Marco Dorigo & Marco Colombetti - 1994 - Artificial Intelligence 71 (2):321-370.