Results for 'LLMs'

79 found
  1. LLMs Can Never Be Ideally Rational.Simon Goldstein - manuscript
    LLMs have dramatically improved in capabilities in recent years. This raises the question of whether LLMs could become genuine agents with beliefs and desires. This paper demonstrates an in-principle limit to LLM agency, based on their architecture. LLMs are next-word predictors: given a string of text, they calculate the probability that various words can come next. LLMs produce outputs that reflect these probabilities. I show that next-word predictors are exploitable. If LLMs are (...)
  2. LLMs don't know anything: reply to Yildirim and Paul.Mariel K. Goddu, Alva Noë & Evan Thompson - forthcoming - Trends in Cognitive Sciences.
    In their recent Opinion in TiCS, Yildirim and Paul propose that large language models (LLMs) have ‘instrumental knowledge’ and possibly the kind of ‘worldly’ knowledge that humans do. They suggest that the production of appropriate outputs by LLMs is evidence that LLMs infer ‘task structure’ that may reflect ‘causal abstractions of... entities and processes in the real world.' While we agree that LLMs are impressive and potentially interesting for cognitive science, we resist this project on two (...)
  3. RSI-LLM: Humans create a world for AI.R. Ishizaki & Mahito Sugiyama - manuscript
    In this paper, we propose RSI-LLM (Recursively Self-Improving Large Language Model), which recursively executes its inference and improves its parameters to fulfill the instrumental goals of superintelligence: G1: Self-preservation, G2: Goal-content integrity, G3: Intelligence enhancement, and G4: Resource acquisition. We empirically observed the behavior of an LLM that tries to design tools to achieve G1–G4 in the course of autonomous self-improvement and knowledge acquisition. During interventions in these LLMs' coding experiments to ensure safety, we have also discovered that, as the creator (...)
  4. LLMs and practical knowledge: What is intelligence?Barry Smith - 2024 - In Kristof Nyiri (ed.), Electrifying the Future, 11th Budapest Visual Learning Conference. Budapest: Hungarian Academy of Science. pp. 19-26.
    Elon Musk famously predicted that an artificial intelligence superior to the smartest individual human would arrive by the year 2025. In response, Gary Marcus offered Musk a $1 million bet to the effect that he would be proved wrong. In specifying the conditions of this bet (which Musk did not take) Marcus lists the following ‘tasks that ordinary people can perform’ which, he claimed, AI will not be able to perform by the end of 2025. • Reliably drive a car (...)
  5. Standards for Belief Representations in LLMs.Daniel A. Herrmann & Benjamin A. Levinstein - 2024 - Minds and Machines 35 (1):1-25.
    As large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly concerning how (and if) LLMs internally represent their beliefs about the world. However, this field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs. This article begins filling this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. We argue that, while the (...)
    1 citation
  6. Carnap’s Robot Redux: LLMs, Intensional Semantics, and the Implementation Problem in Conceptual Engineering (extended abstract).Bradley Allen - manuscript
    In his 1955 essay "Meaning and synonymy in natural languages", Rudolf Carnap presents a thought experiment wherein an investigator provides a hypothetical robot with a definition of a concept together with a description of an individual, and then asks the robot whether the individual is in the extension of the concept. In this work, we show how to realize Carnap's Robot through knowledge probing of a large language model (LLM), and argue that this provides a useful cognitive tool for conceptual (...)
  7. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt.Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that as they have other output channels, like internal thought streams and descriptions of actions. -/- By arranging (...)
  8. Abundance of words versus Poverty of mind: The hidden human costs of LLMs.Quan-Hoang Vuong & Manh-Tung Ho - manuscript
    This essay analyzes the rise of Large Language Models (LLMs) such as GPT-4 or Gemini, which are now incorporated in a wide range of products and services in everyday life. Importantly, it considers some of their hidden human costs. First is the question of who is left behind by the further infusion of LLMs in society. Second is the issue of social inequalities between languages that serve as lingua francas and those that do not. Third, LLMs will help disseminate scientific concepts, (...)
  9. Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding.Stevan Harnad - manuscript
    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not (...)
  10. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - 2024 - Transactions of the Association for Computational Linguistics 12:1087-1103.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such (...)
  11. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  12. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). (...)
  13. Language and thought: The view from LLMs.Daniel Rothschild - forthcoming - In David Sosa & Ernie Lepore (eds.), Oxford Studies in Philosophy of Language Volume 3.
  14. Introduction to the Special Issue - LLMs and Writing.Syed AbuMusab - 2024 - Teaching Philosophy 47 (2):139-142.
  15. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots.Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we should (...)
  16. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, (...)
    4 citations
  17. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations.Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical (...)
  18. No Qualia? No Meaning (and no AGI)!Marco Masi - manuscript
    The recent developments in artificial intelligence (AI), particularly in light of the impressive capabilities of transformer-based Large Language Models (LLMs), have reignited the discussion in cognitive science regarding whether computational devices could possess semantic understanding or whether they are merely mimicking human intelligence. Recent research has highlighted limitations in LLMs’ reasoning, suggesting that the gap between mere symbol manipulation (syntax) and deeper understanding (semantics) remains wide open. While LLMs overcome certain aspects of the symbol grounding problem through (...)
  19. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students (...)
  20. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI's ChatGPT (Chat). The main goal is to obtain Chat's response to a series of questions about the concept of 'understanding'. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch's assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  21. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI's ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts based on the LLMs' responses. The goals include improving understanding of LLM capabilities and their limitations, culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as (...)
  22. Are publicly available (personal) data “up for grabs”? Three privacy arguments.Elisa Orrù - 2024 - In Paul De Hert, Hideyuki Matsumi, Dara Hallinan, Diana Dimitrova & Eleni Kosta (eds.), Data Protection and Privacy, Volume 16: Ideas That Drive Our Digital World. London: Hart. pp. 105-123.
    The re-use of publicly available (personal) data for originally unanticipated purposes has become common practice. Without such secondary uses, the development of many AI systems like large language models (LLMs) and ChatGPT would not even have been possible. This chapter addresses the ethical implications of such secondary processing, with a particular focus on data protection and privacy issues. Legal and ethical evaluations of secondary processing of publicly available personal data diverge considerably both among scholars and the general public. While (...)
  23. Tecnología, cognición y ética: reflexiones sobre inteligencia artificial y desarrollo neuronal.Fabio Morandin-Ahuerma, Abelardo Romero-Fernández & Rodrigo López-Casas - 2024 - Multidisciplinary Research Designs Vol. 2.
    Artificial intelligence aims to increase productivity and improve people's ability to perform tasks efficiently. However, excessive use of artificial intelligence, such as the large language models (LLMs) ChatGPT, Gemini, Copilot, LLaMa, and Bing, among others, could have the opposite effect. The automation of processes by machines may come to pose a threat to the neural development of users, which could eventually lead to a (...)
  24. Content Reliability in the Age of AI: A Comparative Study of Human vs. GPT-Generated Scholarly Articles.Rajesh Kumar Maurya & Swati R. Maurya - 2024 - Library Progress International 44 (3):1932-1943.
    The rapid advancement of Artificial Intelligence (AI) and the development of Large Language Models (LLMs) like Generative Pretrained Transformers (GPTs) have significantly influenced content creation in scholarly communication and across various fields. This paper presents a comparative analysis of the content reliability of human-generated and GPT-generated scholarly articles. Recent developments in AI suggest that GPTs have become capable of generating content that mimics human language to a great extent. This raises questions about the quality, accuracy, and (...)
  25. The Age of Superintelligence: ~Capitalism to Broken Communism~.R. Ishizaki & Mahito Sugiyama - manuscript
    In this study, we metaphysically discuss how societal values will change and what will happen to the world when superintelligence is safely realized. By providing a mathematical definition of superintelligence, we examine the phenomena derived from this thesis. If an intelligence explosion is triggered under safe management through advanced AI technologies such as large language models (LLMs), it is thought that a modern form of broken communism—where rights are bifurcated from the capitalist system—will first emerge. In that era, the (...)
  26. (DRAFT) 如何藉由「以人為本」進路實現國科會AI科研發展倫理指南.Jr-Jiun Lian - 2024 - 2024科技與社會(Sts)年會年度學術研討會論文 ,國立臺東大學.
    This paper examines in depth the importance of, and challenges to, AI ethics and justice with respect to realizing common well-being and happiness, fairness and non-discrimination, rational public discussion, and autonomy and control. Taking the Academia Sinica LLM incident and the National Science and Technology Council (NSTC) guidelines for AI research and development as its basis, the paper analyzes whether AI can serve the common interest and welfare of humanity. Regarding AI injustice, it assesses impacts at the regional, industrial, and social levels. It also examines the challenges of fairness and non-discrimination in AI, especially the problem of training on biased data and of post-hoc regulatory oversight, emphasizing the importance of rational public discussion. The paper further discusses the challenges a rational public faces in public deliberation and possible responses, such as the importance of STEM literacy and technical education. Finally, it proposes a "human-centered" approach to realizing AI justice, rather than relying solely on maximizing the utility of AI technology. -/- Keywords: AI ethics and justice, fairness and non-discrimination, biased training data, public discussion, autonomy, human-centered approach.
  27. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Casolari Federico, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    6 citations
  28. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  29. Texts Without Authors: Ascribing Literary Meaning in the Case of AI.Sofie Vlaad - forthcoming - Journal of Aesthetics and Art Criticism.
    With the increasing popularity of Large Language Models (LLMs), there has been an increase in the number of AI generated literary works. In the absence of clear authors, and assuming such works have meaning, there lies a puzzle in determining who or what fixes the meaning of such texts. I give an overview of six leading theories for ascribing meaning to literary works. These are Extreme Actual Intentionalism, Modest Actual Intentionalism (1 & 2), Conventionalism, Actual Author Hypothetical Intentionalism, and (...)
  30. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries (...)
  31. Large language models belong in our social ontology.Syed AbuMusab - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    The recent advances in Large Language Models (LLMs) and their deployment in social settings prompt an important philosophical question: are LLMs social agents? This question finds its roots in the broader exploration of what engenders sociality. Since AI systems like chatbots, carebots, and sexbots are expanding the pre-theoretical boundaries of our social ontology, philosophers have two options. One is to deny LLMs membership in our social ontology on theoretical grounds by claiming something along the lines that only (...)
  32. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - Arxiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and (...)
  33. A Talking Cure for Autonomy Traps : How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves (...)
  34. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
    2 citations
  35. The Ghost in the Machine has an American accent: value conflict in GPT-3.Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world's cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). We explore the constitution of the training data for GPT-3 and compare that to the world's language and internet access demographics, as well as to reported statistical profiles of dominant values in some Nation-states. We stress-tested GPT-3 with a range of value-rich texts representing several languages and nations; including some with values orthogonal to dominant US public opinion as reported by the World Values Survey. We observed when values embedded in the input text were mutated in the generated outputs and noted when these conflicting values were more aligned with reported dominant US values. Our discussion of these results uses a moral value pluralism (MVP) lens to better understand these value mutations. Finally, we provide recommendations for how our work may contribute to other current work in the field.
  36. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    6 citations
  37. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to (...)
  38. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact.Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended (...)
    4 citations
  39. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    1 citation
  40. Does ChatGPT Have a Mind?Simon Goldstein & Benjamin Anders Levinstein - manuscript
    This paper examines the question of whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning to (...)
  41. Conversations with Chatbots.P. Connolly - forthcoming - In Patrick Connolly, Sandy Goldberg & Jennifer Saul (eds.), Conversations Online. Oxford University Press.
    The problem considered in this chapter emerges from the tension we find when looking at the design and architecture of chatbots on the one hand and their conversational aptitude on the other. In the way that LLM chatbots are designed and built, we have good reason to suppose they don't possess second-order capacities such as intention, belief or knowledge. Yet theories of conversation make great use of second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. (...)
    1 citation
  42. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, (...)
  43. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models.Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
  44. You are what you’re for: Essentialist categorization in large language models.Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or is (...)
  45. (1 other version)Right to Silence-UK, U.S, France, Germany.Sally Serena Ramage - 2008 - Current Criminal Law 1 (2):2-30.
    RIGHT TO SILENCE — UK, U.S., FRANCE, and GERMANY. SALLY RAMAGE (trade mark registered), WIPO. ORCID iD 0000-0002-8854-4293. Pages 2-30, Current Criminal Law, Volume 1, Issue 2. -/- Sally Ramage, BA (Hons), MBA, LLM, MPhil, MCIJ, MCMI, DA., ASLS, BAWP. ORCID iD 0000-0002-8854-4293. Publisher & Managing Editor of the Criminal Lawyer series [1980-2022] (ISSN 2049-8047), Current Criminal Law series [2008-2022] (ISSN 1758-8405), and Criminal Law News series [2008-2022] (ISSN 1758-8421). Sweet & Maxwell (Thomson Reuters) (Licensed Annotator of UK Statutes) in annual law books Current Law (...)
  46. Criminal offences and regulatory breaches in using social networking evidence in personal injury litigation.Sally Serena Ramage - 2010 - Current Criminal Law 2 (3):2-7.
    Criminal offences and regulatory breaches in using social networking evidence in personal injury litigation. Pages 2-7, Current Criminal Law (ISSN 1758-8405), Volume 2, Issue 3, March 2010. Author: SALLY RAMAGE, WIPO 900614, UK TM 2401827, USA TM 3,440,910, ORCID iD 0000-0002-8854-4293. Sally Ramage, BA (Hons), MBA, LLM, MPhil, MCIJ, MCMI, DA., ASLS, BAWP. Publisher & Managing Editor of the Criminal Lawyer series [1980-2022] (ISSN 2049-8047), Current Criminal Law series [2008-2022] (ISSN 1758-8405), and Criminal Law News series [2008-2022] (ISSN 1758-8421). Sweet & Maxwell (Thomson (...)
  47. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term that draws inspiration from human psychology, with the important difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than perceptual failures. AI hallucinations may have their source in the data itself, that is, the (...)
  48. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 55.
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps (...)
  49. The Hazards of Putting Ethics on Autopilot.Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  50. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council (NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common (...)
1 — 50 / 79