  • ChatGPT: deconstructing the debate and moving it forward. Mark Coeckelbergh & David J. Gunkel - 2024 - AI and Society 39 (5):2221-2231.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two-phased approach of deconstruction and (...)
  • Personhood and AI: Why large language models don’t understand us. Jacob Browning - 2023 - AI and Society 39 (5):2499-2506.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this (...)
  • Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. Thilo Hagendorff - 2024 - Minds and Machines 34 (4):1-27.
    The advent of generative artificial intelligence and its widespread adoption in society have engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them (...)
  • Basic values in artificial intelligence: comparative factor analysis in Estonia, Germany, and Sweden. Anu Masso, Anne Kaun & Colin van Noordt - 2024 - AI and Society 39 (6):2775-2790.
    Increasing attention is paid to ethical issues and values when designing and deploying artificial intelligence (AI). However, we do not know how those values are embedded in artificial artefacts or how relevant they are to the population exposed to and interacting with AI applications. Based on literature engaging with ethical principles and moral values in AI, we designed an original survey instrument, including 15 value components, to estimate the importance of these values to people in the general population. The article (...)
  • Beyond Preferences in AI Alignment. Tan Zhi-Xuan, Micah Carroll, Matija Franklin & Hal Ashton - forthcoming - Philosophical Studies:1-51.
    The dominant practice of AI alignment assumes (1) that preferences are an adequate representation of human values, (2) that human rationality can be understood in terms of maximizing the satisfaction of preferences, and (3) that AI systems should be aligned with the preferences of one or more humans to ensure that they behave safely and in accordance with our values. Whether implicitly followed or explicitly endorsed, these commitments constitute what we term a preferentist approach to AI alignment. In this paper, we characterize (...)
  • Getting it right: the limits of fine-tuning large language models. Jacob Browning - 2024 - Ethics and Information Technology 26 (2):1-9.
    The surge in interest in natural language processing in artificial intelligence has led to an explosion of new language models capable of engaging in plausible language use. But ensuring these language models produce honest, helpful, and inoffensive outputs has proved difficult. In this paper, I argue that problems of inappropriate content in current, autoregressive language models—such as ChatGPT and Gemini—are inescapable; merely predicting the next word is incompatible with reliably providing appropriate outputs. The various fine-tuning methods, while helpful, cannot transform the (...)
  • We are Building Gods: AI as the Anthropomorphised Authority of the Past. Carl Öhman - 2024 - Minds and Machines 34 (1):1-18.
    This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of (...)
  • The Executioner Paradox: understanding self-referential dilemma in computational systems. Sachit Mahajan - forthcoming - AI and Society:1-8.
    As computational systems burgeon with advancing artificial intelligence (AI), the deterministic frameworks underlying them face novel challenges, especially when interfacing with self-modifying code. The Executioner Paradox, introduced herein, exemplifies such a challenge where a deterministic Executioner Machine (EM) grapples with self-aware and self-modifying code. This unveils a self-referential dilemma, highlighting a gap in current deterministic computational frameworks when faced with self-evolving code. In this article, the Executioner Paradox is proposed, highlighting the nuanced interactions between deterministic decision-making and self-aware code, and (...)
  • Large Language Models: A Historical and Sociocultural Perspective. Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully realized at that time. The subsequent development of (...)
  • Towards a Conversational Ethics of Large Language Models. Hendrik Kempt, Alon Lavie & Saskia K. Nagel - 2024 - American Philosophical Quarterly 61 (4):339-354.
    Large Language Models are one of the most prominent examples of current uses of AI, and one of the most urgently pursued normative tasks is to make them and their interactive user-interfaces—open-domain chatbots—safe. However, in this paper, we elaborate first on why such a limited view on the permissibility and desirability of their utterances falls conceptually flat, is philosophically insufficient, and leads to severe technological limits. We then propose a positive normative concept, appropriateness, that can provide the required orientation for (...)
  • The breakthrough of philosophy of mind in the construction of artificial intelligence concepts in Marxist philosophy. Xu Lan & Hao Shu - 2024 - Trans/Form/Ação 47 (6):e02400332.
    In the context of contemporary technology, artificial intelligence has become the focus of thinking and dialectics, reaching beyond the dimensions of technology to the core of human existence and consciousness. Philosophy of mind, as an ancient and profound way of thinking, has encountered many obstacles in the construction of artificial intelligence concepts. Marxist philosophy emphasizes the determinism of matter and mode of production. From the perspective of Marxist philosophy, this study compares and analyzes the theoretical differences and (...)