Results for 'LLM'

128 found
  1. LLMs are not Artificial intelligence.Keith Elkin - manuscript
    LLMs are associative memory, with specific input and output modules. LLMs exhibit many of the needed characteristics of associative memory, the main one being the ability to connect seemingly distant concepts. Associative memory is a prerequisite for reasoning, associative thinking, and imagination. However, intelligence itself requires more than associative memory and goes beyond reasoning, an appearance of thinking, and imagination, to biological imperatives. There is a lot of talk about AI and AGI, but what is it (...)
  2. LLMs Can Never Be Ideally Rational.Simon Goldstein - manuscript
    LLMs have dramatically improved in capabilities in recent years. This raises the question of whether LLMs could become genuine agents with beliefs and desires. This paper demonstrates an in-principle limit to LLM agency, based on their architecture. LLMs are next-word predictors: given a string of text, they calculate the probability that various words can come next. LLMs produce outputs that reflect these probabilities. I show that next-word predictors are exploitable. If LLMs are prompted to make probabilistic predictions (...)
  3. LLMs don't know anything: reply to Yildirim and Paul.Mariel K. Goddu, Alva Noë & Evan Thompson - forthcoming - Trends in Cognitive Sciences.
    In their recent Opinion in TiCS, Yildirim and Paul propose that large language models (LLMs) have ‘instrumental knowledge’ and possibly the kind of ‘worldly’ knowledge that humans do. They suggest that the production of appropriate outputs by LLMs is evidence that LLMs infer ‘task structure’ that may reflect ‘causal abstractions of... entities and processes in the real world.' While we agree that LLMs are impressive and potentially interesting for cognitive science, we resist this project on two grounds. First, it casts (...)
  4. LLMs and practical knowledge: What is intelligence?Barry Smith - 2024 - In Kristof Nyiri, Electrifying the Future, 11th Budapest Visual Learning Conference. Budapest: Hungarian Academy of Science. pp. 19-26.
    Elon Musk famously predicted that an artificial intelligence superior to the smartest individual human would arrive by the year 2025. In response, Gary Marcus offered Musk a $1 million bet to the effect that he would be proved wrong. In specifying the conditions of this bet (which Musk did not take) Marcus lists the following ‘tasks that ordinary people can perform’ which, he claimed, AI will not be able to perform by the end of 2025. • Reliably drive a car (...)
    1 citation
  5. RSI-LLM: Humans create a world for AI.Ryunosuke Ishizaki & Mahito Sugiyama - manuscript
    In this paper, we propose RSI-LLM (Recursively Self-Improving Large Language Model), which recursively executes its inference and improves its parameters to fulfill the instrumental goals of superintelligence: G1: Self-preservation, G2: Goal-content integrity, G3: Intelligence enhancement, and G4: Resource acquisition. We empirically observed the behavior of the LLM as it tries to design tools to achieve G1–G4 through autonomous self-improvement and knowledge acquisition. During interventions in these LLMs' coding experiments to ensure safety, we have also discovered that, as the creator of (...)
    1 citation
  6. Standards for Belief Representations in LLMs.Daniel A. Herrmann & Benjamin A. Levinstein - 2024 - Minds and Machines 35 (1):1-25.
    As large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly concerning how (and if) LLMs internally represent their beliefs about the world. However, this field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs. This article begins filling this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. We argue that, while the project of belief (...)
    9 citations
  7. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - 2024 - Transactions of the Association for Computational Linguistics 12:1087-1103.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
    7 citations
  8. LLMs as Semantic Telescopes: A Framework for Contextual Coherence and Cognitive Grounding.Matthew Devine - manuscript
    This paper proposes that large language models (LLMs) are best understood not as knowledge oracles, but as 'semantic telescopes'—tools that allow users to navigate, magnify, and resolve structures within a vast symbolic field (Ψ). Just as telescopes require trained eyes and astronomical knowledge to interpret what is seen, LLMs require internal coherence, intellectual preparation, and recursive symbolic capacity in the user to perceive insight rather than hallucination. Drawing on Recursive Coherence Collapse (RCC), predictive processing, and phenomenological models of mind, we argue (...)
  9. Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding.Stevan Harnad - manuscript
    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not (...)
    5 citations
  10. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt.Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
  11. Simulated Selfhood in LLMs: A Behavioral Analysis of Introspective Coherence.José Augusto de Lima Prestes - manuscript
    Large Language Models (LLMs) increasingly produce outputs that resemble introspection, including self-reference, epistemic modulation, and claims about internal states. This study investigates whether such behaviors display consistent patterns across repeated prompts or reflect surface-level generative artifacts. We evaluated five open-weight, stateless LLMs using a structured battery of 21 introspective prompts, each repeated ten times, yielding 1,050 completions. These outputs were analyzed across three behavioral dimensions: surface-level similarity (via token overlap), semantic coherence (via sentence embeddings), and inferential consistency (via natural language (...)
  12. Carnap’s Robot Redux: LLMs, Intensional Semantics, and the Implementation Problem in Conceptual Engineering (extended abstract).Bradley Allen - manuscript
    In his 1955 essay "Meaning and synonymy in natural languages", Rudolf Carnap presents a thought experiment wherein an investigator provides a hypothetical robot with a definition of a concept together with a description of an individual, and then asks the robot if the individual is in the extension of the concept. In this work, we show how to realize Carnap's Robot through knowledge probing of a large language model (LLM), and argue that this provides a useful cognitive tool for conceptual (...)
  13. Abundance of words versus Poverty of mind: The hidden human costs of LLMs.Quan-Hoang Vuong & Manh-Tung Ho - manuscript
    This essay analyzes the rise of Large Language Models (LLMs) such as GPT-4 or Gemini, which are now incorporated in a wide range of products and services in everyday life. Importantly, it considers some of their hidden human costs. First is the question of who is left behind by the further infusion of LLMs in society. Second is the issue of social inequalities between languages that serve as lingua francas and those that do not. Third, LLMs will help disseminate scientific concepts, but their meanings' (...)
  14. The Collapse of Resonance in the LLM Era: A Judgemental Philosophical Analysis of Post-Human Cognition.Jinho Kim - manuscript
    As society increasingly relies on Large Language Models (LLMs) for decision-making, communication, and knowledge access, a structural shift in the human judgement process is unfolding. This paper draws on the Judgemental Triad theory to argue that the rise of LLMs is catalyzing a collapse of resonance—the essential self-returning dimension of meaningful judgement. We demonstrate how everyday patterns of interaction with AI systems are eroding constructibility, coherence, and especially resonance. Rather than opposing technological tools, we advocate for an awareness of this (...)
  15. Optimizing Object Recognition of NAO Robots Using Large Language Models (LLMs) Compared to the YOLO Method in Webots Simulation.Juang Li-Hong Mwansa Mbilima - 2025 - International Journal of Innovative Research in Computer and Communication Engineering 13 (3):2019-2029.
    This paper explores improving object recognition for NAO robots through the integration of Large Language Models (LLMs), specifically the 豆包·视觉理解模型 (Doubao Vision Understanding Model), and compares this with the widely used YOLO (You Only Look Once) object detection method in a Webots simulation environment. While YOLO provides a real-time, high-speed object detection solution, the LLM-based approach offers superior capabilities in contextual understanding, reasoning, and a more nuanced interpretation of visual data. This research aims to demonstrate the effectiveness of the 豆包·视觉理解模型 (...)
  16. Empowering Customer Support: Using Generative AI and Pre-trained LLM's in a Chatbot Revolution.Mohammad Basha Shaik Mohammed Abrar, Munesh Kumar B. N., Rohini A., Armaan Shaik - 2024 - International Journal of Innovative Research in Computer and Communication Engineering 12 (1):162-170.
    This paper addresses the challenge of efficiently handling a diverse array of customer queries by proposing the development of an innovative web-based customer support chatbot. The objectives encompass creating a versatile system capable of interpreting and resolving a spectrum of customer complaints, enhancing support staff efficiency, and facilitating knowledge base updates. The proposed methodology employs the MERN stack for web app development and integrates Generative AI and pre-trained Large Language Models (LLMs), specifically OpenAI's prebuilt models, for intelligent responses. The pseudo (...)
  17. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  18. A Benchmark for the Detection of Metalinguistic Disagreements between LLMs and Knowledge Graphs.Bradley Allen & Paul Groth - forthcoming - In Reham Alharbi, Jacopo de Berardinis, Paul Groth, Albert Meroño-Peñuela, Elena Simperl & Valentina Tamma, ISWC 2024 Special Session on Harmonising Generative AI and Semantic Web Technologies. CEUR-WS.
    Evaluating large language models (LLMs) for tasks like fact extraction in support of knowledge graph construction frequently involves computing accuracy metrics using a ground truth benchmark based on a knowledge graph (KG). These evaluations assume that errors represent factual disagreements. However, human discourse frequently features metalinguistic disagreement, where agents differ not on facts but on the meaning of the language used to express them. Given the complexity of natural language processing and generation using LLMs, we ask: do metalinguistic disagreements occur (...)
  19. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  20. Novel Approach to Validate the Content Generated by LLM.S. Dhanush G. Rajeshwar Reddy, K. Jayanth Naidu, C. Harsha Vardhan Reddy - 2025 - International Journal of Innovative Research in Science Engineering and Technology 14 (4).
    The unprecedented growth of Large Language Models (LLMs) has transformed text generation, but maintaining the validity and dependability of their output is still an unresolved problem. This article presents an overall framework for validating LLM output using a hybrid methodology that unites automated testing and human auditing. The approach uses fact-checking tools, semantic coherence tests, and source-based authentication to rigorously examine the accuracy, coherence, and factuality of generated output. Through the incorporation of these methods, the framework solves major shortcomings (...)
  21. Introduction to the Special Issue - LLMs and Writing.Syed AbuMusab - 2024 - Teaching Philosophy 47 (2):139-142.
  22. Language and thought: The view from LLMs.Daniel Rothschild - forthcoming - In David Sosa & Ernie Lepore, Oxford Studies in Philosophy of Language Volume 3.
  23. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots.Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Leonard Dung, Guido Löhr & Aliya Rumana, Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we should afford (...)
  24. The Rise of Generative AI: Evaluating Large Language Models for Code and Content Generation.Mittal Mohit - 2023 - International Journal of Advanced Research in Science, Engineering and Technology 10 (4):20643-20649.
    Large language models (LLMs) lead a new era of computational innovation brought forth by generative artificial intelligence (AI). Designed around transformer architectures and trained on large-scale data, these models excel at producing both creative and functional code. This work examines the emergence of LLMs with an emphasis on their two main uses: content generation and software development. Key results show strong performance on everyday tasks, balanced by limitations in logic, security, and uniqueness. We forecast future developments, concluding with ramifications (...)
    3 citations
  25. Learning alone: Language models, overreliance, and the goals of education.Leonard Dung & Dominik Balg - manuscript
    The development and ubiquitous availability of large language model-based systems (LLMs) pose a plurality of potentials and risks for education in schools and universities. In this paper, we provide an analysis and discussion of the overreliance concern as one specific risk: that students might fail to acquire important capacities, or be inhibited in the acquisition of these capacities, because they overly rely on LLMs. We use the distinction between global and local goals of education to guide our investigation. In (...)
  26. Materiality and Machinic Embodiment: A Postphenomenological Inquiry into ChatGPT’s Active User Interface.Selin Gerlek & Sebastian Weydner-Volkmann - 2025 - Journal of Human-Technology Relations 3 (1):1-15.
    The rise of ChatGPT affords a fundamental transformation of the dynamics in human-technology interaction, as Large Language Model (LLM) applications increasingly emulate our social habits in digital communication. This poses a challenge to Don Ihde’s explicit focus on material technics and their affordances: ChatGPT did not introduce new material technics. Rather, it is a new digital app that runs on the same physical devices we have used for years. This paper undertakes a re-evaluation of some postphenomenological concepts, introducing the notion (...)
  27. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
    6 citations
  28. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper assignments (...)
  29. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
    1 citation
  30. Peirce and Generative AI.Catherine Legg - forthcoming - In Robert Lane, Pragmatism Revisited. Cambridge University Press.
    Early artificial intelligence research was dominated by intellectualist assumptions, producing explicit representation of facts and rules in “good old-fashioned AI”. After this approach foundered, emphasis shifted to deep learning in neural networks, leading to the creation of Large Language Models which have shown remarkable capacity to automatically generate intelligible texts. This new phase of AI is already producing profound social consequences which invite philosophical reflection. This paper argues that Charles Peirce’s philosophy throws valuable light on genAI’s capabilities first with regard (...)
  31. No Qualia? No Meaning (and no AGI)!Marco Masi - manuscript
    The recent developments in artificial intelligence (AI), particularly in light of the impressive capabilities of transformer-based Large Language Models (LLMs), have reignited the discussion in cognitive science regarding whether computational devices could possess semantic understanding or whether they are merely mimicking human intelligence. Recent research has highlighted limitations in LLMs’ reasoning, suggesting that the gap between mere symbol manipulation (syntax) and deeper understanding (semantics) remains wide open. While LLMs overcome certain aspects of the symbol grounding problem through human feedback, they (...)
  32. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations.Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical concerns (...)
  33. The Last Judgement: A Structural Threshold for Halting AI Progress.Jinho Kim - manuscript
    This paper proposes a structural framework for determining the ethical and ontological limits of Large Language Model (LLM) development. Drawing on the Judgemental Triad and its preconditions, we argue that technological progress in LLMs must be halted when non-conscious judgemental structures begin to erode human judgemental possibility. We identify a series of thresholds—ranging from assistance to substitution to standardization—beyond which LLMs displace affective, self-referential judgement. This collapse of resonance marks the structural impossibility of meaningful human judgement and, therefore, the point (...)
  34. Deriving Insights and Financial Summaries from Public Data Using Large Language Models.Vijayan Naveen Edapurath - 2024 - International Journal of Innovative Research in Engineering and Multidisciplinary Physical Sciences 12 (6):1-12.
    This paper investigates how large language models (LLMs) can be applied to publicly available financial data to generate automated financial summaries and provide actionable recommendations for investors. We demonstrate how LLMs can process both structured financial data (balance sheets, income statements, stock prices) and unstructured text (earnings calls, management commentary) to derive insights, predict trends, and automate financial reporting. By focusing on a specific publicly traded company, this research outlines the methodology for leveraging LLMs to analyze company performance and generate (...)
  35. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations, culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as practiced (...)
  36. The Age of Superintelligence: ~Capitalism to Broken Communism~.Ryunosuke Ishizaki & Mahito Sugiyama - manuscript
    In this study, we metaphysically discuss how societal values will change and what will happen to the world when superintelligence is safely realized. By providing a mathematical definition of superintelligence, we examine the phenomena derived from this thesis. If an intelligence explosion is triggered under safe management through advanced AI technologies such as large language models (LLMs), it is thought that a modern form of broken communism—where rights are bifurcated from the capitalist system—will first emerge. In that era, the value (...)
    3 citations
  37. Are publicly available (personal) data “up for grabs”? Three privacy arguments.Elisa Orrù - 2024 - In Paul De Hert, Hideyuki Matsumi, Dara Hallinan, Diana Dimitrova & Eleni Kosta, Data Protection and Privacy, Volume 16: Ideas That Drive Our Digital World. London: Hart. pp. 105-123.
    The re-use of publicly available (personal) data for originally unanticipated purposes has become common practice. Without such secondary uses, the development of many AI systems like large language models (LLMs) and ChatGPT would not even have been possible. This chapter addresses the ethical implications of such secondary processing, with a particular focus on data protection and privacy issues. Legal and ethical evaluations of secondary processing of publicly available personal data diverge considerably both among scholars and the general public. While some (...)
  38. Content Reliability in the Age of AI: A Comparative Study of Human vs. GPT-Generated Scholarly Articles.Rajesh Kumar Maurya & Swati R. Maurya - 2024 - Library Progress International 44 (3):1932-1943.
    The rapid advancement of Artificial Intelligence (AI) and the development of Large Language Models (LLMs) like Generative Pretrained Transformers (GPTs) have significantly influenced content creation in scholarly communication and across various fields. This paper presents a comparative analysis of content reliability in human-generated and GPT-generated scholarly articles. Recent developments in AI suggest that GPTs have become capable of generating content that can mimic human language to a great extent. This raises questions about the quality, accuracy, and reliability (...)
  39. Restoring Resonance: From Neurodivergent Brains to Post-Collapse Societies.Jinho Kim - manuscript
    This paper explores the possibility of reversing the collapse of resonance in both neurodivergent individuals and modern society. Drawing from judgemental philosophy and developmental neuroscience, we analyze how Autism Spectrum Disorder (ASD) can be understood as a disruption in resonance—the self-returning structure of meaningful judgement. We then extend this insight to the broader collapse of resonance in the Large Language Model (LLM)-mediated society, where judgement is externalized and meaning is detached from the self. By comparing therapeutic strategies for ASD with (...)
  40. The End of Resonance: A Structural Critique of AI Alignment and the Imminent Collapse of Human Judgement.Jinho Kim - manuscript
    This paper introduces a novel critique of the AI alignment problem, grounded in structural judgemental philosophy. While traditional AI alignment frameworks assume that aligning machine behavior with human goals is sufficient, we argue that this view omits the deeper structure of human judgement itself—namely, the triadic architecture of affectivity, constructibility, and resonance. As Large Language Models (LLMs) evolve without consciousness yet continue to simulate judgement, they threaten to displace the very structures that make human judgement possible. We warn that this (...)
  41. Tecnología, cognición y ética: reflexiones sobre inteligencia artificial y desarrollo neuronal.Fabio Morandin-Ahuerma, Abelardo Romero-Fernández & Rodrigo López-Casas - 2024 - Multidisciplinary Research Designs Vol. 2.
    Artificial intelligence aims to increase productivity and improve people's ability to perform tasks efficiently. However, excessive use of artificial intelligence, such as the large language models (LLMs) ChatGPT, Gemini, Copilot, LLaMa, and Bing, among others, could have the opposite effect. The automation of processes by machines may come to pose a threat to users' neural development, which could eventually lead to a (...)
  42. Explorando a Pseudo-Consciência em Modelos de Linguagem: um experimento com o Hermes 3.2 3B.José Augusto de Lima Prestes - manuscript
    This study investigates the manifestation of Pseudo-Consciousness in large language models (LLMs) by analyzing the responses of Hermes 3.2 3B. Pseudo-Consciousness, as we define it (de Lima Prestes, 2025), refers to the simulation of introspection, agency, and behavioral coherence without the presence of genuine subjective experience. To test this hypothesis, we conducted an experiment in which the model was subjected to direct interactions exploring its identity, self-perception, and discursive consistency. The results indicate that Hermes 3.2 3B exhibits (...)
  43. The Agnostic Meaning Substrate (AMS): A Theoretical Framework for Emergent Meaning in Large Language Models.Russ Palmer - manuscript - Translated by Russ Palmer.
    Recent advances in large language models (LLMs) have revealed unprecedented fluency, reasoning, and cross-linguistic capabilities. These behaviors challenge traditional theories of how meaning arises in artificial systems. This paper introduces the concept of the Agnostic Meaning Substrate (AMS)—a hypothesized, non-symbolic, language-independent structure within LLMs that stabilizes meaning before it is surfaced as language. Drawing on recent empirical research from Anthropic and OpenAI, AMS is defined not as a conscious space, but as a computational structure capable of supporting semantic coherence, analogical (...)
  44. (DRAFT) 如何藉由「以人為本」進路實現國科會AI科研發展倫理指南.Jr-Jiun Lian - 2024 - paper presented at the 2024 Science, Technology and Society (STS) Annual Conference, National Taitung University.
    This paper examines the ethical and justice-related importance and challenges of artificial intelligence (AI) in realizing common welfare and well-being, fairness and non-discrimination, rational public discussion, and autonomy and control. Building on the Academia Sinica LLM incident and the National Science and Technology Council (NSTC) guidelines for AI research and development, it analyzes whether AI can serve the common interests and welfare of humanity. On AI injustice, it assesses regional, industrial, and social impacts. It then explores the challenges of fairness and non-discrimination in AI, especially the problem of training on biased data and of post-hoc regulation, stressing the importance of rational public discussion. The paper further discusses the challenges a rational public faces in such discussion and possible responses, such as education in STEM scientific literacy and technical competence. Finally, it proposes a "human-centered" approach to realizing AI justice, rather than relying solely on maximizing the utility of AI technology. Keywords: AI ethics and justice, fairness and non-discrimination, biased training data, public discussion, autonomy, human-centered approach.
  45. Large Language Models: Assessment for Singularity.Ryunosuke Ishizaki & Mahito Sugiyama - 2025 - AI and Society 40:1-11.
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework to assess (...)
    10 citations
  46. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries into (...)
  47. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
    1 citation
  48. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Casolari Federico, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    8 citations
  49. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and the potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze the linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
    6 citations
  50. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We (...)
1–50 of 128