  • Making AI Intelligible: Philosophical Foundations. Herman Cappelen & Josh Dever - 2021 - New York, USA: Oxford University Press.
    Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. (...)
  • Fodor on imagistic mental representations. Daniel C. Burnston - 2020 - Rivista Internazionale di Filosofia e Psicologia 11 (1):71-94.
    Fodor’s view of the mind is thoroughly computational. This means that the basic kind of mental entity is a “discursive” mental representation and operations over this kind of mental representation have broad architectural scope, extending out to the edges of perception and the motor system. However, in multiple epochs of his work, Fodor attempted to define a functional role for non-discursive, imagistic representation. I describe and critique his two considered proposals. The first view says that images play a particular (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Indexical AI. Leif Weatherby & Brian Justie - 2022 - Critical Inquiry 48 (2):381-415.
    This article argues that the algorithms known as neural nets underlie a new form of artificial intelligence that we call indexical AI. Contrasting with the once dominant symbolic AI, large-scale learning systems have become a semiotic infrastructure underlying global capitalism. Their achievements are based on a digital version of the sign-function index, which points rather than describes. As these algorithms spread to parse the increasingly heavy data volumes on platforms, it becomes harder to remain skeptical of their results. We call (...)
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Assembled Bias: Beyond Transparent Algorithmic Bias. Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias _assembled bias._ Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature (...)
  • Deep Learning Applied to Scientific Discovery: A Hot Interface with Philosophy of Science. Louis Vervoort, Henry Shevlin, Alexey A. Melnikov & Alexander Alodjants - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (2):339-351.
    We review publications in automated scientific discovery using deep learning, with the aim of shedding light on problems with strong connections to philosophy of science, of physics in particular. We show that core issues of philosophy of science, notably the nature of scientific theories, the nature of unification, and the nature of causation, loom large in scientific deep learning. Therefore, advances in deep learning could, and ideally should, have an impact on philosophy of science, and vice versa. We suggest lines of (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence. Catherine Stinson - 2020 - Philosophy of Science 87 (4):590-611.
    There is a vast literature within philosophy of mind that focuses on artificial intelligence, but hardly mentions methodological questions. There is also a growing body of work in philosophy of science about modeling methodology that hardly mentions examples from cognitive science. Here these discussions are connected. Insights developed in the philosophy of science literature about the importance of idealization provide a way of understanding the neural implausibility of connectionist networks. Insights from neurocognitive science illuminate how relevant similarities between models and (...)
  • Moving beyond content‐specific computation in artificial neural networks. Nicholas Shea - 2021 - Mind and Language 38 (1):156-177.
    A basic deep neural network (DNN) is trained to exhibit a large set of input–output dispositions. While being a good model of the way humans perform some tasks automatically, without deliberative reasoning, more is needed to approach human‐like artificial intelligence. Analysing recent additions brings to light a distinction between two fundamentally different styles of computation: content‐specific and non‐content‐specific computation (as first defined here). For example, deep episodic RL networks draw on both. So does human conceptual reasoning. Combining the two takes (...)
  • The Importance of Understanding Deep Learning. Tim Räz & Claus Beisbart - forthcoming - Erkenntnis:1-18.
    Some machine learning models, in particular deep neural networks, are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with DNNs. (...)
  • Material perception for philosophers. J. Brendan Ritchie, Vivian C. Paulun, Katherine R. Storrs & Roland W. Fleming - 2021 - Philosophy Compass 16 (10):e12777.
    Common everyday materials such as textiles, foodstuffs, soil or skin can have complex, mutable and varied appearances. Under typical viewing conditions, most observers can visually recognize materials effortlessly, and determine many of their properties without touching them. Visual material perception raises many fascinating questions for vision researchers, neuroscientists and philosophers, yet has received little attention compared to the perception of color or shape. Here we discuss some of the challenges that material perception raises and argue that further philosophical thought should (...)
  • The physics of representation. Russell A. Poldrack - 2020 - Synthese 199 (1-2):1307-1325.
    The concept of “representation” is used broadly and uncontroversially throughout neuroscience, in contrast to its highly controversial status within the philosophy of mind and cognitive science. In this paper I first discuss the way that the term is used within neuroscience, in particular describing the strategies by which representations are characterized empirically. I then relate the concept of representation within neuroscience to one that has developed within the field of machine learning. I argue that the recent success of artificial neural (...)
  • First-Class and Coach-Class Knowledge. Spencer Paulson - 2023 - Episteme 20 (3):736-756.
    I will discuss a variety of cases such that the subject's believing truly is somewhat of an accident, but less so than in a Gettier case. In each case, this is because her reasons are not ultimately undefeated full stop, but they are ultimately undefeated with certain qualifications. For example, the subject's reasons might be ultimately defeated considered in themselves but ultimately undefeated considered as a proper part of an inference to the best explanation that is undefeated without qualification. In (...)
  • Correspondence Theory of Semantic Information. Marcin Miłkowski - 2023 - British Journal for the Philosophy of Science 74 (2):485-510.
    A novel account of semantic information is proposed. The gist is that structural correspondence, analysed in terms of similarity, underlies an important kind of semantic information. In contrast to extant accounts of semantic information, it does not rely on correlation, covariation, causation, natural laws, or logical inference. Instead, it relies on structural similarity, defined in terms of correspondence between classifications of tokens into types. This account elucidates many existing uses of the notion of information, for example, in the context of (...)
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • The State Space of Artificial Intelligence. Holger Lyre - 2020 - Minds and Machines 30 (3):325-347.
    The goal of the paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another dimension is (...)
  • Throwing light on black boxes: emergence of visual categories from deep learning. Ezequiel López-Rubio - 2020 - Synthese 198 (10):10021-10041.
    One of the best known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data. Moreover, it has complicated investigation about the role of symbols and language in cognition. This state of things has been radically changed by recent experimental findings in artificial deep learning research. Two kinds of artificial (...)
  • Naturalization without associationist reduction: a brief rebuttal to Yoshimi. Jesse Lopes - forthcoming - Phenomenology and the Cognitive Sciences:1-9.
    Yoshimi has attempted to defuse my argument concerning the identification of network abstraction with empiricist abstraction - thus entailing psychologism - by claiming that the argument does not generalize from the example of simple feed-forward networks. I show that the particular details of networks are logically irrelevant to the nature of the abstractive process they employ. This is ultimately because deep artificial neural networks (ANNs) and dynamical systems theory applied to the mind (DST) are both associationisms - that is, empiricist (...)
  • Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution? Jesse Lopes - 2023 - Minds and Machines 33 (3):507-524.
    The representations of deep convolutional neural networks (CNNs) are formed from generalizing similarities and abstracting from differences in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339–5372, 2018). The empiricist theory of abstraction is well understood to entail infinite regress and circularity in content constitution (Husserl, Logical Investigations. Routledge, 2001). This paper argues these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine’s “apparatus of identity and quantification” in order to (1) (...)
  • Language and embodiment—Or the cognitive benefits of abstract representations. Nikola A. Kompa - 2019 - Mind and Language 36 (1):27-47.
    Cognition, it is often heard nowadays, is embodied. My concern is with embodied accounts of language comprehension. First, the basic idea will be outlined and some of the evidence that has been put forward in their favor will be examined. Second, their empiricist heritage and their conception of abstract ideas will be discussed. Third, an objection will be raised according to which embodied accounts underestimate the cognitive functions language fulfills. The remainder of the paper will be devoted to arguing for (...)
  • How Abstract (Non-embodied) Linguistic Representations Augment Cognitive Control. Nikola A. Kompa & Jutta L. Mueller - 2020 - Frontiers in Psychology 11.
    Recent scholarship emphasizes the scaffolding role of language for cognition. Language, it is claimed, is a cognition-enhancing niche (Clark, 2006), a programming tool for cognition (Lupyan and Bergen, 2016), even a neuroenhancement (Dove, 2019), and augments cognitive functions such as memory, categorization, cognitive control as well as meta-cognitive abilities (‘thinking about thinking’). Yet the notion that language enhances or augments cognition does not fit in with embodied approaches to language processing, or so we will argue. Accounts aiming to explain how (...)
  • Mapping representational mechanisms with deep neural networks. Phillip Hintikka Kieval - 2022 - Synthese 200 (3):1-25.
    The predominance of machine learning based techniques in cognitive neuroscience raises a host of philosophical and methodological concerns. Given the messiness of neural activity, modellers must make choices about how to structure their raw data to make inferences about encoded representations. This leads to a set of standard methodological assumptions about when abstraction is appropriate in neuroscientific practice. Yet, when made uncritically these choices threaten to bias conclusions about phenomena drawn from data. Contact between the practices of multivariate pattern analysis (...)
  • Insightful artificial intelligence. Marta Halina - 2021 - Mind and Language 36 (2):315-329.
    In March 2016, DeepMind's computer programme AlphaGo surprised the world by defeating the world‐champion Go player, Lee Sedol. AlphaGo exhibits a novel, surprising and valuable style of play and has been recognised as “creative” by the artificial intelligence (AI) and Go communities. This article examines whether AlphaGo engages in creative problem solving according to the standards of comparative psychology. I argue that AlphaGo displays one important aspect of creative problem solving (namely mental scenario building in the form of Monte Carlo (...)
  • Exploring Minds: Modes of Modeling and Simulation in Artificial Intelligence. Hajo Greif - 2021 - Perspectives on Science 29 (4):409-435.
    The aim of this paper is to grasp the relevant distinctions between various ways in which models and simulations in Artificial Intelligence (AI) relate to cognitive phenomena. In order to get a systematic picture, a taxonomy is developed that is based on the coordinates of formal versus material analogies and theory-guided versus pre-theoretic models in science. These distinctions have parallels in the computational versus mimetic aspects and in analytic versus exploratory types of computer simulation. The proposed taxonomy cuts across the (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Decentring the discoverer: how AI helps us rethink scientific discovery. Elinor Clark & Donal Khosrowi - 2022 - Synthese 200 (6):1-26.
    This paper investigates how intuitions about scientific discovery using artificial intelligence can be used to improve our understanding of scientific discovery more generally. Traditional accounts of discovery have been agent-centred: they place emphasis on identifying a specific agent who is responsible for conducting all, or at least the important part, of a discovery process. We argue that these accounts experience difficulties capturing scientific discovery involving AI and that similar issues arise for human discovery. We propose an alternative, collective-centred view as (...)
  • Prediction versus understanding in computationally enhanced neuroscience. Mazviita Chirimuuta - 2020 - Synthese 199 (1-2):767-790.
    The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks used as a predictive tool in neuroscience. Following two case studies on the use of ANN’s to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding (...)
  • Empiricism in the foundations of cognition. Timothy Childers, Juraj Hvorecký & Ondrej Majer - 2023 - AI and Society 38 (1):67-87.
    This paper traces the empiricist program from early debates between nativism and behaviorism within philosophy, through debates about early connectionist approaches within the cognitive sciences, and up to their recent iterations within the domain of deep learning. We demonstrate how current debates on the nature of cognition via deep network architecture echo some of the core issues from the Chomsky/Quine debate and investigate the strength of support offered by these various lines of research to the empiricist standpoint. Referencing literature from (...)
  • Deep learning: A philosophical introduction. Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation (...)
  • Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour. Cameron Buckner - 2023 - British Journal for the Philosophy of Science 74 (3):681-712.
    The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • The Curious Case of Connectionism. Istvan S. N. Berkeley - 2019 - Open Philosophy 2 (1):190-205.
    Connectionist research first emerged in the 1940s. The first phase of connectionism attracted a certain amount of media attention, but scant philosophical interest. The phase came to an abrupt halt, due to the efforts of Minsky and Papert (1969), when they argued for the intrinsic limitations of the approach. In the mid-1980s connectionism saw a resurgence. This marked the beginning of the second phase of connectionist research. This phase did attract considerable philosophical attention. It was of philosophical interest, as it (...)
  • Constructing Embodied Emotion with Language: Moebius Syndrome and Face-Based Emotion Recognition Revisited. Hunter Gentry - forthcoming - Australasian Journal of Philosophy.
    Some embodied theories of concepts state that concepts are represented in a sensorimotor manner, typically via simulation in sensorimotor cortices. Fred Adams (2010) has advanced an empirical argument against embodied concepts, reasoning as follows. If concepts are embodied, then patients with certain sensorimotor impairments should perform worse on categorization tasks involving those concepts. Adams cites a study with Moebius Syndrome patients that shows typical categorization performance in face-based emotion recognition. Adams concludes that their typical performance shows that embodiment is false. (...)
  • Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  • Naturalism Meets the Personal Level: How Mixed Modelling Flattens the Mind. Robert D. Rupert - manuscript
    In this essay, it is argued that naturalism of an even moderate sort speaks strongly against a certain widely held thesis about the human mental (and cognitive) architecture: that it is divided into two distinct levels, the personal and the subpersonal, about the former of which we gain knowledge in a manner that effectively insulates such knowledge from the results of scientific research. An empirically motivated alternative is proposed, according to which the architecture is, so to speak, flattened from (...)
  • Connectionism. James Garson & Cameron Buckner - 2019 - Stanford Encyclopedia of Philosophy.
  • Experience-Based Intuitions. Tiffany Zhu - unknown
    In this thesis, I argue that many identification intuitions, such as one that helps you identify the authorship of a painting you are seeing for the first time, fall under the class of experience-based intuitions. Such identification intuitions cannot arise without intuition generating systems (IGSs) that are shaped by experiences accumulated during one’s life. On my view, experience-based intuitions are produced by domain-general learning systems of hierarchical abstraction which may be modeled by deep convolutional neural networks. Owing to the mechanism (...)