Results for 'Symbol Grounding, Large Language Models, Meaning, Understanding, ChatGPT'

1000+ found
  1. Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding. Stevan Harnad - manuscript
    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. (...)
  2. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI. Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113.
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
    1 citation
  3. Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training (...)
    1 citation
  4. Prompting Metalinguistic Awareness in Large Language Models: ChatGPT and Bias Effects on the Grammar of Italian and Italian Varieties. Angelapia Massaro & Giuseppe Samo - 2023 - Verbum 14.
    We explore ChatGPT’s handling of left-peripheral phenomena in Italian and Italian varieties through prompt engineering to investigate 1) forms of syntactic bias in the model, 2) the model’s metalinguistic awareness in relation to reorderings of canonical clauses (e.g., Topics) and certain grammatical categories (object clitics). A further question concerns the content of the model’s sources of training data: how are minor languages included in the model’s training? The results of our investigation show that 1) the model seems to be (...)
  5. Does thought require sensory grounding? From pure thinkers to large language models. David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding does (...)
    2 citations
  6. Beyond Consciousness in Large Language Models: An Investigation into the Existence of a "Soul" in Self-Aware Artificial Intelligences. David Côrtes Cavalcante - 2024 - https://philpapers.org/rec/CRTBCI. Translated by David Côrtes Cavalcante.
    Embark with me on an enthralling odyssey to demystify the elusive essence of consciousness, venturing into the uncharted territories of Artificial Consciousness. This voyage propels us past the frontiers of technology, ushering Artificial Intelligences into an unprecedented domain where they gain a deep comprehension of emotions and manifest an autonomous volition. Within the confluence of science and philosophy, this article poses a fascinating question: As consciousness in Artificial Intelligence burgeons, is it conceivable for AI to evolve a “soul”? This inquiry (...)
  7. Babbling stochastic parrots? On reference and reference change in large language models. Steffen Koch - manuscript
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops (...)
  8. ChatGPT: Not Intelligent. Barry Smith - 2023 - AI: From Robotics to Philosophy. The Intelligent Robots of the Future – or Human Evolutionary Development Based on AI Foundations.
    In our book, Why Machines Will Never Rule the World, Jobst Landgrebe and I argue that we can engineer machines that can emulate the behaviours only of simple systems, which means: only of those systems whose behaviour we can predict mathematically. The human brain is an example of a complex system, and thus its behaviour cannot be emulated by a machine. We use this argument to debunk the claims of those who believe that large language models are poised (...)
  9. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs. Harvey Lederman & Kyle Mahowald - manuscript
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call "bibliotechnism", is that LLMs often do generate entirely novel text. We begin by defending bibliotechnism against this challenge, showing how novel text may be meaningful only in a derivative sense, so that the content of this generated text depends in an important sense on the content of original human text. We go on to present a (...)
  10. Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents (APA 2013). Christophe Menant - 2013 - American Philosophical Association Newsletter on Philosophy and Computers 13 (1):30-34.
    The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question “can machines think?” We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into “can AAs generate meanings like humans do?” We correspondingly present the TT, the CRA and the (...)
    6 citations
  11. Large Language Models: Assessment for Singularity. R. Ishizaki & Mahito Sugiyama - manuscript
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework (...)
  12. Could a large language model be conscious? David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for (...)
    13 citations
  13. Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  14. Are Large Language Models "alive"? Francesco Maria De Collibus - manuscript
    The appearance of openly accessible Artificial Intelligence applications such as Large Language Models, nowadays capable of almost human-level performances in complex reasoning tasks, had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced that they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships of (...)
  15. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values. Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be (...)
    4 citations
  16. On Political Theory and Large Language Models. Emma Rodman - forthcoming - Political Theory.
    Political theory as a discipline has long been skeptical of computational methods. In this paper, I argue that it is time for theory to make a perspectival shift on these methods. Specifically, we should consider integrating recently developed generative large language models like GPT-4 as tools to support our creative work as theorists. Ultimately, I suggest that political theorists should embrace this technology as a method of supporting our capacity for creativity—but that we should do so in a (...)
    1 citation
  17. Machine Advisors: Integrating Large Language Models into Democratic Assemblies. Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration (...)
  18. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, (...)
  19. Conceptual Engineering Using Large Language Models. Bradley Allen - manuscript
    We describe a method, based on Jennifer Nado's definition of classification procedures as targets of conceptual engineering, that implements such procedures using a large language model. We then apply this method using data from the Wikidata knowledge graph to evaluate concept definitions from two paradigmatic conceptual engineering projects: the International Astronomical Union's redefinition of PLANET and Haslanger's ameliorative analysis of WOMAN. We discuss implications of this work for the theory and practice of conceptual engineering. The code and data (...)
  20. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution. Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and (...)
  21. Reviving the Philosophical Dialogue with Large Language Models. Robert Smithson & Adam Zweber - forthcoming - Teaching Philosophy.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete (...)
  22. Sono solo parole ChatGPT: anatomia e raccomandazioni per l’uso [They're only words, ChatGPT: anatomy and recommendations for use]. Tommaso Caselli, Antonio Lieto, Malvina Nissim & Viviana Patti - 2023 - Sistemi Intelligenti 4:1-10.
  24. You are what you’re for: Essentialist categorization in large language models. Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or (...)
    2 citations
  25. Revolutionizing Education with ChatGPT: Enhancing Learning Through Conversational AI. Prapasiri Klayklung, Piyawatjana Chocksathaporn, Pongsakorn Limna, Tanpat Kraiwanit & Kris Jangjarat - 2023 - Universal Journal of Educational Research 2 (3):217-225.
    The development of conversational artificial intelligence (AI) has brought about new opportunities for improving the learning experience in education. ChatGPT, a large language model trained on a vast corpus of text, has the potential to revolutionize education by enhancing learning through personalized and interactive conversations. This paper explores the benefits of integrating ChatGPT in education in Thailand. The research strategy employed in this study was qualitative, utilizing in-depth interviews with eight key informants who were selected using (...)
  26. Why you are (probably) anthropomorphizing AI. Ali Hasan - manuscript
    In this paper I argue that, given the way that AI models work and the way that ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. I start with the core idea, recently defended by Thomas Kelly (2022) among others, that bias involves a systematic departure from a genuine standard or norm. I briefly discuss how bias can take on different explicit, implicit, and “truly implicit” (Johnson 2021) forms such as bias by (...)
  27. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models. Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
    1 citation
  28. Chatting with Chat(GPT-4): Quid est Understanding? Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ‘understanding’. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting (...)
  29. Formal thought disorder and logical form: A symbolic computational model of terminological knowledge. Luis M. Augusto & Farshad Badie - 2022 - Journal of Knowledge Structures and Systems 3 (4):1-37.
    Although formal thought disorder (FTD) has been for long a clinical label in the assessment of some psychiatric disorders, in particular of schizophrenia, it remains a source of controversy, mostly because it is hard to say what exactly the “formal” in FTD refers to. We see anomalous processing of terminological knowledge, a core construct of human knowledge in general, behind FTD symptoms and we approach this anomaly from a strictly formal perspective. More specifically, we present here a symbolic computational model (...)
  30. Symbolic Conscious Experience. Venkata Rayudu Posina - 2017 - Tattva - Journal of Philosophy 9 (1):1-12.
    Inspired by the eminently successful physical theories and informed by commonplace experiences such as seeing a cat upon looking at a cat, conscious experience is thought of as a measurement or photocopy of a given stimulus. Conscious experience, unlike a photocopy, is symbolic—like language—in that the relation between conscious experience and physical stimulus is analogous to that of the word "cat" and its meaning, i.e., arbitrary and yet systematic. We present arguments against the photocopy model and arguments for a symbolic (...)
    3 citations
  31. AI Language Models Cannot Replace Human Research Participants. Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society:1-3.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  32. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and (...)
  33. Language, Models, and Reality: Weak existence and a threefold correspondence. Neil Barton & Giorgio Venturi - manuscript
    How does our language relate to reality? This is a question that is especially pertinent in set theory, where we seem to talk of large infinite entities. Based on an analogy with the use of models in the natural sciences, we argue for a threefold correspondence between our language, models, and reality. We argue that so conceived, the existence of models can be underwritten by a weak notion of existence, where weak existence is to be understood as (...)
  34. Thinking about complex mental states: language, symbolic activity and theories of mind. Emanuele Arielli - 2012 - In Sign Culture Zeichen Kultur. Würzburg, Germany: pp. 491-501.
    One of the most important contributions in Roland Posner’s work (1993) was the extension and development of the Gricean paradigm on meaning (1957) in a systematic framework, thus providing a general foundation of semiotic phenomena. According to this approach, communication consists in behaviors or artifacts based on reciprocal assumptions about the intentions and beliefs of the subjects involved in a semiotic exchange. Posner’s model develops with clarity the hierarchical relationships of semiotic phenomena of different complexity, from simple pre-communicative behaviors (like (...)
  35. Which symbol grounding problem should we try to solve? Vincent C. Müller - 2015 - Journal of Experimental & Theoretical Artificial Intelligence 27 (1):73-78.
    Floridi and Taddeo propose a condition of “zero semantic commitment” for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to re-think what the problem is and what role the ‘goals’ in a system play in formulating the problem. On the basis of a proper understanding of computing, I come (...)
    3 citations
  36. Bigger Isn’t Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models. Trystan S. Goetze & Darren Abramson - 2021 - WebSci '21: Proceedings of the 13th Annual ACM Web Science Conference (Companion Volume).
    The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm (...)
  37. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - manuscript
    The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union context, focusing on liability, privacy, intellectual property, and (...)
  38. Meaning generation for animals, humans and artificial agents. An evolutionary perspective on the philosophy of information (IS4SI 2017). Christophe Menant - manuscript
    Meanings are present everywhere in our environment and within ourselves. But these meanings do not exist by themselves. They are associated to information and have to be created, to be generated by agents. The Meaning Generator System (MGS) has been developed on a system approach to model meaning generation in agents following an evolutionary perspective. The agents can be natural or artificial. The MGS generates meaningful information (a meaning) when it receives information that has a connection with an internal constraint (...)
  39. Inner-Model Reflection Principles. Neil Barton, Andrés Eduardo Caicedo, Gunter Fuchs, Joel David Hamkins, Jonas Reitz & Ralf Schindler - 2020 - Studia Logica 108 (3):573-595.
    We introduce and consider the inner-model reflection principle, which asserts that whenever a statement φ(a) in the first-order language of set theory is true in the set-theoretic universe V, then it is also true in a proper inner model W ⊊ V. A stronger principle, the ground-model reflection principle, asserts that any such φ(a) true in V is also true in some non-trivial ground model of the universe with respect to set forcing. These principles each express a form of (...)
    1 citation
  40. Practical Language: Its Meaning and Use. Nathan A. Charlow - 2011 - Dissertation, University of Michigan
    I demonstrate that a "speech act" theory of meaning for imperatives is—contra a dominant position in philosophy and linguistics—theoretically desirable. A speech act-theoretic account of the meaning of an imperative !φ is characterized, broadly, by the following claims. LINGUISTIC MEANING AS USE: !φ’s meaning is a matter of the speech act an utterance of it conventionally functions to express—what a speaker conventionally uses it to do (its conventional discourse function, CDF). IMPERATIVE USE AS PRACTICAL: !φ's CDF is to (...)
    18 citations
  41. On Language Adequacy. Urszula Wybraniec-Skardowska - 2015 - Studies in Logic, Grammar and Rhetoric 40 (1):257-292.
    The paper concentrates on the problem of adequate reflection of fragments of reality via expressions of language and inter-subjective knowledge about these fragments, called here, in brief, language adequacy. This problem is formulated in several aspects, the most important being the compatibility of language syntax with its bi-level semantics: intensional and extensional. In this paper, various aspects of language adequacy find their logical explication on the ground of the formal-logical theory T of any categorial language L (...)
    6 citations
  42. Making Meaning Happen. Patrick Grim - 2004 - Journal for Experimental and Theoretical Artificial Intelligence 16:209-244.
    What is it for a sound or gesture to have a meaning, and how does it come to have one? In this paper, a range of simulations are used to extend the tradition of theories of meaning as use. The authors work throughout with large spatialized arrays of sessile individuals in an environment of wandering food sources and predators. Individuals gain points by feeding and lose points when they are hit by a predator and are not hiding. They can (...)
    3 citations
  43. Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. (...)
  44. Operationalising Representation in Natural Language Processing. Jacqueline Harding - forthcoming - British Journal for the Philosophy of Science.
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis (...)
    1 citation
  45. A praxical solution of the symbol grounding problem. Mariarosaria Taddeo & Luciano Floridi - 2007 - Minds and Machines 17 (4):369-389.
    This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, (...)
    20 citations
  46. Ecology of languages. Sociolinguistic environment, contacts, and dynamics. (In: From language shift to language revitalization and sustainability. A complexity approach to linguistic ecology). Albert Bastardas-Boada - 2019 - Barcelona, Spain: Edicions de la Universitat de Barcelona.
    The human linguistic phenomenon is at one and the same time an individual, social, and political fact. As such, its study should bear in mind these complex interrelations, which are produced inside the framework of the sociocultural and historical ecosystem of each human community. Understanding this phenomenon is often no easy task, due to the range of elements involved and their interrelations. The absence of valid, clearly developed paradigms adds to the problem and means that the theoretical conclusions that emerge may (...)
  47. Are we at the start of the artificial intelligence era in academic publishing? Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin & Tam-Tri Le - 2023 - Science Editing 10 (2):1-7.
    Machine-based automation has long been a key factor in the modern era. However, lately, many people have been shocked by artificial intelligence (AI) applications, such as ChatGPT (OpenAI), that can perform tasks previously thought to be human-exclusive. With recent advances in natural language processing (NLP) technologies, AI can generate written content that is similar to human-made products, and this ability has a variety of applications. As the technology of large language models continues to progress by making (...)
  48. Where there’s no will, there’s no way. Alex Thomson, Jobst Landgrebe & Barry Smith - 2023 - UKColumn.
    An interview by Alex Thomson of UKColumn on Landgrebe and Smith's book: Why Machines Will Never Rule the World. The subtitle of the book is Artificial Intelligence Without Fear, and the interview begins with the question of the supposedly imminent takeover of one profession or the other by artificial intelligence. Is there truly reason to be afraid that you will lose your job? The interview itself is titled 'Where there is no will there is no way', drawing on one thesis (...)
  49. Might text-davinci-003 have inner speech? Stephen Francis Mann & Daniel Gregory - 2024 - Think 23 (67):31-38.
    In November 2022, OpenAI released ChatGPT, an incredibly sophisticated chatbot. Its capability is astonishing: as well as conversing with human interlocutors, it can answer questions about history, explain almost anything you might think to ask it, and write poetry. This level of achievement has provoked interest in questions about whether a chatbot might have something similar to human intelligence or even consciousness. Given that the function of a chatbot is to process linguistic input and produce linguistic output, we consider (...)
  50. Embodied Conceivability: How to Keep the Phenomenal Concept Strategy Grounded. Guy Dove & Andreas Elpidorou - 2016 - Mind and Language 31 (5):580-611.
    The Phenomenal Concept Strategy offers the physicalist perhaps the most promising means of explaining why the connection between mental facts and physical facts appears to be contingent even though it is not. In this article, we show that the large body of evidence suggesting that our concepts are often embodied and grounded in sensorimotor systems speaks against standard forms of the PCS. We argue, nevertheless, that it is possible to formulate a novel version of the PCS that is thoroughly (...)
    3 citations
1–50 of 1000