  • The computational theory of mind.Steven Horst - 2005 - Stanford Encyclopedia of Philosophy.
    Over the past thirty years, it has been common to hear the mind likened to a digital computer. This essay is concerned with a particular philosophical view that holds that the mind literally is a digital computer (in a specific sense of “computer” to be developed), and that thought literally is a kind of computation. This view—which will be called the “Computational Theory of Mind” (CTM)—is thus to be distinguished from other and broader attempts to connect the mind with computation, (...)
  • Reliability in Machine Learning.Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
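    As a concrete illustration of one reliability concept the paper's introductory survey covers, the sketch below computes a simple expected calibration error (ECE) in Python. The predictions and labels are simulated, and the ten-bin scheme is just one common choice; nothing here is prescribed by the paper itself.

```python
# Minimal expected-calibration-error (ECE) computation, for illustration only.
# Calibration is one core reliability notion: among cases assigned probability
# p, roughly a fraction p should turn out positive.
import numpy as np

rng = np.random.default_rng(3)
probs = rng.uniform(0, 1, 5000)                               # simulated model probabilities
labels = (rng.uniform(0, 1, 5000) < probs**1.3).astype(int)   # slightly miscalibrated outcomes

bins = np.linspace(0, 1, 11)                                  # ten equal-width bins
ece = 0.0
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (probs >= lo) & (probs < hi)
    if mask.any():
        gap = abs(labels[mask].mean() - probs[mask].mean())   # |observed freq - mean confidence|
        ece += mask.mean() * gap                              # weight by bin frequency
print(f"expected calibration error: {ece:.3f}")
```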
  • Designing AI for mental health diagnosis: challenges from sub-Saharan African value-laden judgements on mental health disorders.Edmund Terem Ugar & Ntsumi Malele - forthcoming - Journal of Medical Ethics.
    Recently clinicians have become more reliant on technologies such as artificial intelligence (AI) and machine learning (ML) for effective and accurate diagnosis and prognosis of diseases, especially mental health disorders. These remarks, however, apply primarily to Europe, the USA, China and other technologically developed nations. Africa is yet to leverage the potential applications of AI and ML within the medical space. Sub-Saharan African countries are currently disadvantaged economically and infrastructure-wise. Yet precisely these circumstances create significant opportunities for the deployment of (...)
  • ML interpretability: Simple isn't easy.Tim Räz - 2024 - Studies in History and Philosophy of Science Part A 103 (C):159-167.
  • Linguistic Competence and New Empiricism in Philosophy and Science.Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  • The Dynamicist Landscape.David L. Barack - 2023 - Topics in Cognitive Science.
    The dynamical hypothesis states that cognitive systems are dynamical systems. While dynamical systems play an important role in many cognitive phenomena, the dynamical hypothesis as stated applies to every system and so fails both to specify what makes cognitive systems distinct and to distinguish between proposals regarding the nature of cognitive systems. To avoid this problem, I distinguish several different types of dynamical systems, outlining four dimensions along which dynamical systems can vary: total-state versus partial-state, internal versus external, macroscopic versus (...)
  • Predicting and explaining with machine learning models: Social science as a touchstone.Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have proved largely unsuccessful – at least when measured by traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, we argue that, (...)
  • Operationalising Representation in Natural Language Processing.Jacqueline Harding - forthcoming - British Journal for the Philosophy of Science.
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis (...)
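    Since the abstract's proposal is operationalised via probing classifiers, a minimal sketch of that general methodology may help. It assumes synthetic stand-ins for a model's hidden states (real probes are fit on activations extracted from an NLP model), and the shuffled-label control shown is one standard sanity check, not Harding's specific criteria.

```python
# Sketch of a probing classifier: a simple supervised model is trained on a
# network's frozen hidden states to test whether a property is linearly
# decodable from them. Hidden states are simulated here; in practice they
# would be activations extracted from an NLP model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 64
labels = rng.integers(0, 2, size=n)               # hypothetical property, e.g. singular vs. plural
signal = np.outer(labels, rng.normal(size=d))     # inject the property along one direction
hidden_states = signal + rng.normal(size=(n, d))  # stand-in for frozen activations

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy:   {probe.score(X_te, y_te):.2f}")

# Control: if a probe fit on shuffled labels also scores well, high accuracy
# reflects probe capacity rather than information in the representation.
control = LogisticRegression(max_iter=1000).fit(X_tr, rng.permutation(y_tr))
print(f"control accuracy: {control.score(X_te, y_te):.2f}")
```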
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • An Alternative to Cognitivism: Computational Phenomenology for Deep Learning.Pierre Beckmann, Guillaume Köstner & Inês Hipólito - 2023 - Minds and Machines 33 (3):397-427.
    We propose a non-representationalist framework for deep learning relying on a novel method, computational phenomenology, a dialogue between the first-person perspective (relying on phenomenology) and the mechanisms of computational models. We thereby propose an alternative to the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities. This interpretation mainly relies on neuro-representationalism, a position that combines a strong ontological commitment towards scientific theoretical entities and the idea that the brain operates on symbolic (...)
  • Moving beyond content‐specific computation in artificial neural networks.Nicholas Shea - 2021 - Mind and Language 38 (1):156-177.
    A basic deep neural network (DNN) is trained to exhibit a large set of input–output dispositions. While being a good model of the way humans perform some tasks automatically, without deliberative reasoning, more is needed to approach human‐like artificial intelligence. Analysing recent additions brings to light a distinction between two fundamentally different styles of computation: content‐specific and non‐content‐specific computation (as first defined here). For example, deep episodic RL networks draw on both. So does human conceptual reasoning. Combining the two takes (...)
  • Philosophers Ought to Develop, Theorize About, and Use Philosophically Relevant AI.Graham Clay & Caleb Ontiveros - 2023 - Metaphilosophy 54 (4):463-479.
    The transformative power of artificial intelligence (AI) is coming to philosophy—the only question is the degree to which philosophers will harness it. In this paper, we argue that the application of AI tools to philosophy could have an impact on the field comparable to the advent of writing, and that it is likely that philosophical progress will significantly increase as a consequence of AI. The role of philosophers in this story is not merely to use AI but also to help (...)
  • Evidence, computation and AI: why evidence is not just in the head.Darrell P. Rowbottom, André Curtis-Trudel & William Peden - 2023 - Asian Journal of Philosophy 2 (1):1-17.
    Can scientific evidence outstretch what scientists have mentally entertained, or could ever entertain? This article focuses on the plausibility and consequences of an affirmative answer in a special case. Specifically, it discusses how we may treat automated scientific data-gathering systems—especially AI systems used to make predictions or to generate novel theories—from the point of view of confirmation theory. It uses AlphaFold2 as a case study.
  • Deep Learning Applied to Scientific Discovery: A Hot Interface with Philosophy of Science.Louis Vervoort, Henry Shevlin, Alexey A. Melnikov & Alexander Alodjants - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (2):339-351.
    We review publications in automated scientific discovery using deep learning, with the aim of shedding light on problems with strong connections to philosophy of science, of physics in particular. We show that core issues of philosophy of science, relating notably to the nature of scientific theories, the nature of unification, and the nature of causation, loom large in scientific deep learning. Therefore, advances in deep learning could, and ideally should, have an impact on philosophy of science, and vice versa. We suggest lines of (...)
  • Est-ce que Vous Compute?Arianna Falbo & Travis LaCroix - 2022 - Feminist Philosophy Quarterly 8 (3).
    Cultural code-switching concerns how we adjust our overall behaviours, manners of speaking, and appearance in response to a perceived change in our social environment. We defend the need to investigate cultural code-switching capacities in artificial intelligence systems. We explore a series of ethical and epistemic issues that arise when bringing cultural code-switching to bear on artificial intelligence. Building upon Dotson’s (2014) analysis of testimonial smothering, we discuss how emerging technologies in AI can give rise to epistemic oppression, and specifically, a (...)
  • Eliminativism and Reading One's Own Mind.T. Parent - manuscript
    Some contemporary philosophers suggest that we know just by introspection that folk psychological states exist. However, such an "armchair refutation" of eliminativism seems too easy. I first attack two stratagems, inspired by Descartes, on how such a refutation might proceed. However, I concede that the Cartesian intuition that we have direct knowledge of representational states is very powerful. The rest of this paper then offers an error theory of how that intuition might really be mistaken. The idea is that introspection (...)
  • The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences.Jake Quilty-Dunn, Nicolas Porot & Eric Mandelbaum - 2023 - Behavioral and Brain Sciences 46:e261.
    Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate–argument structure; (iv) logical operators; (v) inferential (...)
  • Instruments, agents, and artificial intelligence: novel epistemic categories of reliability.Eamon Duede - 2022 - Synthese 200 (6):1-20.
    Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting in the reliability of DL. While today’s artificial intelligence exhibits characteristics common to (...)
  • Troubles with mathematical contents.Marco Facchin - forthcoming - Philosophical Psychology.
    To account for the explanatory role representations play in cognitive science, Egan’s deflationary account introduces a distinction between cognitive and mathematical contents. According to that account, only the latter are genuine explanatory posits of cognitive-scientific theories, as they represent the arguments and values cognitive devices need to represent to compute. Here, I argue that the deflationary account suffers from two important problems, whose roots trace back to the introduction of mathematical contents. First, I will argue that mathematical contents do not (...)
  • The Importance of Understanding Deep Learning.Tim Räz & Claus Beisbart - 2024 - Erkenntnis 89 (5).
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with (...)
  • Phenomenology: What’s AI got to do with it?Alessandra Buccella & Alison A. Springle - 2023 - Phenomenology and the Cognitive Sciences 22 (3):621-636.
    Nowadays, philosophers and scientists tend to agree that, even though human and artificial intelligence work quite differently, they can still illuminate aspects of each other, and knowledge in one domain can inspire progress in the other. For instance, the notion of “artificial” or “synthetic” phenomenology has been gaining some traction in recent AI research. In this paper, we ask the question: what (if anything) is the use of thinking about phenomenology in the context of AI, and in particular machine learning? (...)
  • Assembled Bias: Beyond Transparent Algorithmic Bias.Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias assembled bias. Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature (...)
  • Deep learning and synthetic media.Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic audiovisual (...)
  • Metaphysics, Meaning, and Morality: A Theological Reflection on A.I.Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):157-181.
    Theologians often reflect on the ethical uses and impacts of artificial intelligence, but when it comes to artificial intelligence techniques themselves, some have questioned whether much exists to discuss in the first place. If the significance of computational operations is attributed rather than intrinsic, what are we to say about them? Ancient thinkers—namely Augustine of Hippo (lived 354–430)—break the impasse, enabling us to draw forth the moral and metaphysical significance of current developments like the “deep neural networks” that are responsible (...)
  • Reinforcement learning: A brief guide for philosophers of mind.Julia Haas - 2022 - Philosophy Compass 17 (9):e12865.
    I argue for the role of reinforcement learning in the philosophy of mind. To start, I make several assumptions about the nature of reinforcement learning and its instantiation in minds like ours. I then review some of the contributions reinforcement learning methods have made across the so-called 'decision sciences.' Finally, I show how principles from reinforcement learning can shape philosophical debates regarding the nature of perception and characterisations of desire.
  • Making AI Intelligible: Philosophical Foundations.Herman Cappelen & Josh Dever - 2021 - New York, USA: Oxford University Press.
    Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament.Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Transparency in Complex Computational Systems.Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have (...)
  • Machine Learning Application to Predict The Quality of Watermelon Using JustNN.Ibrahim M. Nasser - 2019 - International Journal of Engineering and Information Systems (IJEAIS) 3 (10):1-8.
    In this paper, a predictive artificial neural network (ANN) model was developed and validated for the purpose of predicting whether a watermelon is good or bad; the model was developed using the JustNN software environment. Prediction is done based on some watermelon attributes that are chosen to be input data to the ANN, such as color, density, and sugar rate. The model went through multiple learning-validation cycles until the error was zero, so the model is 100% accurate for (...)
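    The paper itself uses the JustNN GUI environment; purely as an illustration, here is a rough Python analogue of the described task with synthetic stand-in attributes. Driving training error to zero, as the abstract reports, is typically a sign of overfitting, so the held-out score below is the more meaningful accuracy figure.

```python
# Rough Python analogue of the described setup (the paper uses the JustNN GUI):
# a small neural network classifying watermelons as good or bad from attributes
# such as color, density, and sugar rate. All data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
color = rng.uniform(0, 1, n)       # hypothetical encoded attributes
density = rng.uniform(0.9, 1.1, n)
sugar = rng.uniform(5, 12, n)
X = np.column_stack([color, density, sugar])
y = (sugar + 5 * (density - 1) + rng.normal(0, 0.5, n) > 8.5).astype(int)  # 1 = good

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=1).fit(X_tr, y_tr)
print(f"training accuracy: {net.score(X_tr, y_tr):.2f}")   # may approach 1.0 (memorisation)
print(f"held-out accuracy: {net.score(X_te, y_te):.2f}")   # the honest estimate
```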
  • Allure of Simplicity.Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and interpretability by design. Comparing the three strategies, I argue that (...)
  • Are machines radically contextualist?Ryan M. Nefdt - 2023 - Mind and Language 38 (3):750-771.
    In this article, I describe a novel position on the semantics of artificial intelligence. I present a problem for the current artificial neural networks used in machine learning, specifically with relation to natural language tasks. I then propose that from a metasemantic level, meaning in machines can best be interpreted as radically contextualist. Finally, I consider what this might mean for human‐level semantic competence from a comparative perspective.
  • Philosophy of science at sea: Clarifying the interpretability of machine learning.Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence.Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • Values and inductive risk in machine learning modelling: the case of binary classification models.Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
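    A small sketch of where inductive risk enters a binary classifier of the kind the abstract discusses: the decision threshold applied to model scores trades false negatives against false positives, and choosing it encodes a value judgement about which error is worse (in disease screening, a missed case is usually judged worse, favouring a lower threshold). The scores and labels below are simulated.

```python
# The threshold applied to a classifier's scores is a value-laden choice:
# lowering it reduces missed cases (false negatives) at the price of more
# false alarms (false positives). Scores and labels are simulated.
import numpy as np

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 10000)                                     # 1 = disease present
scores = np.clip(0.7 * y + rng.normal(0.15, 0.25, 10000), 0, 1)   # model's risk scores

for threshold in (0.5, 0.3):
    pred = (scores >= threshold).astype(int)
    fn = np.sum((pred == 0) & (y == 1))                           # missed cases
    fp = np.sum((pred == 1) & (y == 0))                           # false alarms
    print(f"threshold {threshold}: false negatives={fn}, false positives={fp}")
```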
  • What is Interpretability?Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • The State Space of Artificial Intelligence.Holger Lyre - 2020 - Minds and Machines 30 (3):325-347.
    The goal of the paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another dimension is (...)
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars.Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Radical empiricism and machine learning research.Judea Pearl - 2021 - Journal of Causal Inference 9 (1):78-82.
    I contrast the “data fitting” vs “data interpreting” approaches to data science along three dimensions: Expediency, Transparency, and Explainability. “Data fitting” is driven by the faith that the secret to rational decisions lies in the data itself. In contrast, the data-interpreting school views data, not as a sole source of knowledge but as an auxiliary means for interpreting reality, and “reality” stands for the processes that generate the data. I argue for restoring balance to data science through a task-dependent symbiosis (...)
  • The Philosophical Significance of Deep Learning (深層学習の哲学的意義).Takayuki Suzuki - 2021 - Kagaku Tetsugaku 53 (2):151-167.
  • Throwing light on black boxes: emergence of visual categories from deep learning.Ezequiel López-Rubio - 2020 - Synthese 198 (10):10021-10041.
    One of the best known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data. Moreover, it has complicated investigation about the role of symbols and language in cognition. This state of things has been radically changed by recent experimental findings in artificial deep learning research. Two kinds of artificial (...)
  • Ontology, neural networks, and the social sciences.David Strohmaier - 2020 - Synthese 199 (1-2):4775-4794.
    The ontology of social objects and facts remains a field of continued controversy. This situation complicates the life of social scientists who seek to make predictive models of social phenomena. For the purposes of modelling a social phenomenon, we would like to avoid having to make any controversial ontological commitments. The overwhelming majority of models in the social sciences, including statistical models, are built upon ontological assumptions that can be questioned. Recently, however, artificial neural networks have made their way into (...)
  • Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice.Thomas Grote & Philipp Berens - 2023 - Journal of Medicine and Philosophy 48 (1):84-97.
    In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. Thereby, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) (...)
  • Recognizing why vision is inferential.J. Brendan Ritchie - 2022 - Synthese 200 (1):1-27.
    A theoretical pillar of vision science in the information-processing tradition is that perception involves unconscious inference. The classic support for this claim is that, since retinal inputs underdetermine their distal causes, visual perception must be the conclusion of a process that starts with premises representing both the sensory input and previous knowledge about the visible world. Focus on this “argument from underdetermination” gives the impression that, if it fails, there is little reason to think that visual processing involves unconscious inference. (...)