
Citations of:

Making AI Meaningful Again

Synthese 198 (March):2061-2081 (2021)

  • Computers Are Syntax All the Way Down: Reply to Bozşahin. William J. Rapaport - 2019 - Minds and Machines 29 (2):227-237.
    A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
  • Sheets, Diagrams, and Realism in Peirce. Frederik Stjernfelt - 2022 - Berlin: De Gruyter.
    This book investigates a number of central problems in the philosophy of Charles Peirce grouped around the realism of his semiotics: the issue of how sign systems are developed and used in the investigation of reality. Thus, it deals with the precise character of Peirce's realism; with Peirce's special notion of propositions as signs which, at the same time, denote and describe the same object. It deals with diagrams as signs which depict more or less abstract states-of-affairs, facilitating reasoning about (...)
  • On the Opacity of Deep Neural Networks. Anders Søgaard - forthcoming - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to (...)
  • Biomedical Ontologies. Barry Smith - 2022 - In Peter L. Elkin (ed.), Terminology, Ontology and Their Implementations: Teaching Guide and Notes. Springer. pp. 125-169.
    We begin at the beginning, with an outline of Aristotle’s views on ontology and with a discussion of the influence of these views on Linnaeus. We move from there to consider the data standardization initiatives launched in the 19th century, and then turn to investigate how the idea of computational ontologies developed in the AI and knowledge representation communities in the closing decades of the 20th century. We show how aspects of this idea, particularly those relating to the use of (...)
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
  • Understanding models understanding language. Anders Søgaard - 2022 - Synthese 200 (6):1-16.
    Landgrebe and Smith (Synthese 198:2061–2081, 2021) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence—perhaps more widely known as natural language processing: The models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number of misunderstandings in their analysis, and present what I take to be (...)
  • The Unbearable Shallow Understanding of Deep Learning. Alessio Plebe & Giorgio Grasso - 2019 - Minds and Machines 29 (4):515-553.
    This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given for the ups and downs, which have characterized neural networks research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given, in order to sieve out (...)
  • Simple Models in Complex Worlds: Occam’s Razor and Statistical Learning Theory. Falco J. Bargagli Stoffi, Gustavo Cevolani & Giorgio Gnecco - 2022 - Minds and Machines 32 (1):13-42.
    The idea that “simplicity is a sign of truth”, and the related “Occam’s razor” principle, stating that, all other things being equal, simpler models should be preferred to more complex ones, have been long discussed in philosophy and science. We explore these ideas in the context of supervised machine learning, namely the branch of artificial intelligence that studies algorithms which balance simplicity and accuracy in order to effectively learn about the features of the underlying domain. Focusing on statistical learning theory, (...)
  • Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade.
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  • Why machines do not understand: A response to Søgaard. Jobst Landgrebe & Barry Smith - 2023 - arXiv.
    Some defenders of so-called `artificial intelligence' believe that machines can understand language. In particular, Søgaard has argued in his "Understanding models understanding language" (2022) for a thesis of this sort. His idea is that (1) where there is semantics there is also understanding and (2) machines are not only capable of what he calls `inferential semantics', but even that they can (with the help of inputs from sensors) `learn' referential semantics. We show that he goes wrong because he pays insufficient (...)
  • An argument for the impossibility of machine intelligence (preprint). Jobst Landgrebe & Barry Smith - 2021 - arXiv.
    Since the noun phrase `artificial intelligence' (AI) was coined, it has been debated whether humans are able to create intelligence using technology. We shed new light on this question from the point of view of thermodynamics and mathematics. First, we define what it is to be an agent (device) that could be the bearer of AI. Then we show that the mainstream definitions of `intelligence' proposed by Hutter and others and still accepted by the AI community are too weak even (...)
  • Ontology and Cognitive Outcomes. David Limbaugh, Jobst Landgrebe, David Kasmier, Ronald Rudnicki, James Llinas & Barry Smith - 2020 - Journal of Knowledge Structures and Systems 1 (1):3-22.
    The term ‘intelligence’ as used in this paper refers to items of knowledge collected for the sake of assessing and maintaining national security. The intelligence community (IC) of the United States (US) is a community of organizations that collaborate in collecting and processing intelligence for the US. The IC relies on human-machine-based analytic strategies that 1) access and integrate vast amounts of information from disparate sources, 2) continuously process this information, so that, 3) a maximally comprehensive understanding of world actors (...)
  • There is no general AI. Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
  • Making space: The natural, cultural, cognitive and social niches of human activity. Barry Smith - 2021 - Cognitive Processing 22 (supplementary issue 1):77-87.
    This paper is in two parts. Part 1 examines the phenomenon of making space as a process involving one or other kind of legal decision-making, for example when a state authority authorizes the creation of a new highway along a certain route or the creation of a new park in a certain location. In cases such as this a new abstract spatial entity comes into existence – the route, the area set aside for the park – followed only later by (...)