  • A Pluralist Perspective on Shape Constancy. E. J. Green - forthcoming - The British Journal for the Philosophy of Science.
    The ability to perceive the shapes of things as enduring through changes in how they stimulate our sense organs is vital to our sense of stability in the world. But what sort of capacity is shape constancy, and how is it reflected in perceptual experience? This paper defends a pluralist account of shape constancy: There are multiple kinds of shape constancy centered on geometrical properties at various levels of abstraction, and properties at these various levels feature in the content of (...)
  • Scene context automatically drives predictions of object transformations. Giacomo Aldegheri, Surya Gayet & Marius V. Peelen - 2023 - Cognition 238 (C):105521.
  • Deep problems with neural network models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job in predicting (...)
  • Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations. Joshua C. Peterson, Joshua T. Abbott & Thomas L. Griffiths - 2018 - Cognitive Science 42 (8):2648-2669.
    Decades of psychological research have been aimed at modeling how people learn features and categories. The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have reached or surpassed human accuracy on tasks such as identifying objects in natural images. These networks learn representations of real‐world stimuli that can potentially be leveraged to capture psychological representations. We find that state‐of‐the‐art object classification networks provide surprisingly accurate predictions of human similarity judgments for (...)
  • Solving Bongard Problems With a Visual Language and Pragmatic Constraints. Stefan Depeweg, Constantin A. Rothkopf & Frank Jäkel - 2024 - Cognitive Science 48 (5):e13432.
    More than 50 years ago, Bongard introduced 100 visual concept learning problems as a challenge for artificial vision systems. These problems are now known as Bongard problems. Although they are well known in cognitive science and artificial intelligence, only very little progress has been made toward building systems that can solve a substantial subset of them. In the system presented here, visual features are extracted through image processing and then translated into a symbolic visual vocabulary. We introduce a formal language (...)
  • Empirical evidence for perspectival similarity. Jorge Morales & Chaz Firestone - 2023 - Psychological Review 1 (1):311-320.
    When a circular coin is rotated in depth, is there any sense in which it comes to resemble an ellipse? While this question is at the center of a rich and divided philosophical tradition (with some scholars answering affirmatively and some negatively), Morales et al. (2020, 2021) took an empirical approach, reporting 10 experiments whose results favor such perspectival similarity. Recently, Burge and Burge (2022) offered a vigorous critique of this work, objecting to its approach and conclusions on both philosophical (...)
  • Implications of capacity-limited, generative models for human vision. Joseph Scott German & Robert A. Jacobs - 2023 - Behavioral and Brain Sciences 46:e391.
    Although discriminative deep neural networks are currently dominant in cognitive modeling, we suggest that capacity-limited, generative models are a promising avenue for future work. Generative models tend to learn both local and global features of stimuli and, when properly constrained, can learn componential representations and response biases found in people's behaviors.