Contents (5 found)
  1. Why Does AI Lie So Much? The Problem Is More Deep Rooted Than You Think. Mir H. S. Quadri - 2024 - Arkinfo Notes.
    The rapid advancements in artificial intelligence, particularly in natural language processing, have brought to light a critical challenge, i.e., the semantic grounding problem. This article explores the root causes of this issue, focusing on the limitations of connectionist models that dominate current AI research. By examining Noam Chomsky's theory of Universal Grammar and his critiques of connectionism, I highlight the fundamental differences between human language understanding and AI language generation. Introducing the concept of semantic grounding, I emphasise the need for (...)
  2. The Exploratory Status of Postconnectionist Models. Miljana Milojevic & Vanja Subotić - 2020 - Theoria: Beograd 2 (63):135-164.
    This paper aims to offer a new view of the role of connectionist models in the study of human cognition through the conceptualization of the history of connectionism – from the simplest perceptrons to convolutional neural nets based on deep learning techniques, as well as through the interpretation of criticism coming from symbolic cognitive science. Namely, the connectionist approach in cognitive science was the target of sharp criticism from the symbolists, which on several occasions caused its marginalization and almost complete (...)
  3. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics that classifies computational problems by their inherent difficulty, i.e., categorizes algorithms according to their performance and relates these classes to each other. P problems are the class of computational problems that can be solved in polynomial time by a deterministic Turing machine, while solutions to NP problems can be verified in polynomial time; we still do not know whether they can also be solved in polynomial time. A (...)
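    The P/NP distinction summarized in the abstract above can be made concrete with Subset-Sum, a classic NP problem: checking a proposed solution (a certificate) takes polynomial time, while the only obvious way to find one is exponential search. A minimal sketch (illustrative only; the function names `verify` and `solve` are mine, not from the chapter):

    ```python
    from itertools import combinations

    def verify(nums, target, certificate):
        """Polynomial-time check: is the certificate a subset summing to target?"""
        return all(x in nums for x in certificate) and sum(certificate) == target

    def solve(nums, target):
        """Brute-force search: tries all 2^n subsets (exponential time)."""
        for r in range(len(nums) + 1):
            for subset in combinations(nums, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    cert = solve(nums, 9)        # finding a certificate is slow in general
    print(verify(nums, 9, cert)) # but checking one is fast
    ```

    The asymmetry between `verify` and `solve` is exactly what the open P = NP question asks about: whether every problem whose solutions are quickly checkable is also quickly solvable.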
  4. How a mind works. I, II, III. David A. Booth - 2013 - ResearchGate Personal Profile.
    Abstract (for the combined three Parts) This paper presents the simplest known theory of processes involved in a person’s unconscious and conscious achievements such as intending, perceiving, reacting and thinking. The basic principle is that an individual has mental states which possess quantitative causal powers and are susceptible to influences from other mental states. Mental performance discriminates the present level of a situational feature from its level in an individually acquired, multiple featured norm (exemplar, template, standard). The effect on output (...)
  5. How a neural net grows symbols. James Franklin - 1996 - In Peter Bartlett (ed.), Proceedings of the Seventh Australian Conference on Neural Networks, Canberra. ACNN '96. pp. 91-96.
    Brains, unlike artificial neural nets, use symbols to summarise and reason about perceptual input. But unlike symbolic AI, they “ground” the symbols in the data: the symbols have meaning in terms of data, not just meaning imposed by the outside user. If neural nets could be made to grow their own symbols in the way that brains do, there would be a good prospect of combining neural networks and symbolic AI, in such a way as to combine the good features (...)
    2 citations