Toward biologically plausible artificial vision

Behavioral and Brain Sciences 46:e290 (2023)

Abstract

Quilty-Dunn et al. argue that deep convolutional neural networks (DCNNs) optimized for image classification exemplify structural disanalogies to human vision. A different kind of artificial vision – found in reinforcement-learning agents navigating artificial three-dimensional environments – can be expected to be more human-like. Recent work suggests that language-like representations substantially improve these agents' performance, lending some indirect support to the language-of-thought hypothesis (LoTH).

Author's Profile

Mason Westfall
Washington University in St. Louis
