Contents
26 found
  1. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory. Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  2. Beyond Interpretability and Explainability: Systematic AI and the Function of Systematizing Thought. Matthieu Queloz - manuscript
    Recent debates over artificial intelligence have focused on its perceived lack of interpretability and explainability. I argue that these notions fail to capture an important aspect of what end-users—as opposed to developers—need from these models: what is needed is systematicity, in a more demanding sense than the compositionality-related sense that has dominated discussions of systematicity in the philosophy of language and cognitive science over the last thirty years. To recover this more demanding notion of systematicity, I distinguish between (i) the (...)
  3. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains. Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to facilitate progress (...)
  4. Sideloading: Creating a Model of a Person via LLM with Very Large Prompt. Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
  5. The Curious Case of Uncurious Creation. Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue epistemic (...)
  6. Conversations with Chatbots. P. J. Connolly - forthcoming - In Patrick Connolly, Sandy Goldberg & Jennifer Saul (eds.), Conversations Online. Oxford University Press.
    The problem considered in this chapter emerges from the tension we find when looking at the design and architecture of chatbots on the one hand and their conversational aptitude on the other. In the way that LLM chatbots are designed and built, we have good reason to suppose they don't possess second-order capacities such as intention, belief or knowledge. Yet theories of conversation make great use of second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. (...)
  7. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  8. AI Wellbeing. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - Asian Journal of Philosophy.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
  9. Why Do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education, despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning yields better model performance than rule-based learning when training neural networks, such as large language models. I particularly focus on why education that aims to promote the development of multifaceted moral functioning can be conducted effectively by using exemplars, which is similar to exemplar-based learning (...)
  10. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
  11. Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs. Shun Iizuka - forthcoming - Techné: Research in Philosophy and Technology.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  12. Interventionist Methods for Interpreting Deep Neural Networks. Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini (ed.), Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable "black boxes," making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
  13. Reflection, confabulation, and reasoning. Jennifer Nagel - forthcoming - In Luis Oliveira & Joshua DiPaolo (eds.), Kornblith and His Critics. Wiley-Blackwell.
    Humans have distinctive powers of reflection: no other animal seems to have anything like our capacity for self-examination. Many philosophers hold that this capacity has a uniquely important guiding role in our cognition; others, notably Hilary Kornblith, draw attention to its weaknesses. Kornblith chiefly aims to dispel the sense that there is anything ‘magical’ about second-order mental states, situating them in the same causal net as ordinary first-order mental states. But elsewhere he goes further, suggesting that there is something deeply (...)
  14. Language and thought: The view from LLMs. Daniel Rothschild - forthcoming - In David Sosa & Ernie Lepore (eds.), Oxford Studies in Philosophy of Language Volume 3.
  15. Chinese Chat Room: AI hallucinations, epistemology and cognition. Kristina Šekrst - forthcoming - Studies in Logic, Grammar and Rhetoric.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with these terminological issues, the paper demonstrates that AI systems, like human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  16. Creative Minds Like Ours? Large Language Models and the Creative Aspect of Language Use. Vincent Carchidi - 2024 - Biolinguistics 18:1-31.
    Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky, in his Cartesian Linguistics, terms the “creative aspect of language use” (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language—a tripartite depiction whose function in biolinguistics is to highlight a species-specific form of (...)
  17. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional lives are no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
  18. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes? Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  19. Is Alignment Unsafe? Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors to control. (...)
  20. Imagination, Creativity, and Artificial Intelligence. Peter Langland-Hassan - 2024 - In Amy Kind & Julia Langkau (eds.), Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press.
    This chapter considers the potential of artificial intelligence (AI) to exhibit creativity and imagination, in light of recent advances in generative AI and the use of deep neural networks (DNNs). Reasons for doubting that AI exhibits genuine creativity or imagination are considered, including the claim that the creativity of an algorithm lies in its developer, that generative AI merely reproduces patterns in its training data, and that AI is lacking in a necessary feature for creativity or imagination, such as consciousness, (...)
  21. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 4.
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and (...)
  22. Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable. Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  23. Reviving the Philosophical Dialogue with Large Language Models. Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper assignments (...)
  24. Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in connectionist cognitive science. This strand of cognitive science has been opposed to traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including the language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  25. Interdisciplinary Communication by Plausible Analogies: The Case of Buddhism and Artificial Intelligence. Michael Cooper - 2022 - Dissertation, University of South Florida
    Communicating interdisciplinary information is difficult, even when two fields are ostensibly discussing the same topic. In this work, I’ll discuss the capacity for analogical reasoning to provide a framework for developing novel judgments utilizing similarities in separate domains. I argue that analogies are best modeled after Paul Bartha’s By Parallel Reasoning, and that they can be used to create a Toulmin-style warrant that expresses a generalization. I argue that these comparisons provide insights into interdisciplinary research. In order to demonstrate this (...)
  26. Ethics at the Frontier of Human-AI Relationships. Henry Shevlin - manuscript
    The idea that humans might one day form persistent and dynamic relationships with artificial agents in professional, social, and even romantic contexts is a longstanding one. However, developments in machine learning and especially natural language processing over the last five years have led to this possibility becoming actualised at a previously unseen scale. Apps like Replika, Xiaoice, and CharacterAI boast many millions of active long-term users, and give rise to emotionally complex experiences. In this paper, I provide an overview of these developments, beginning (...)