Results for 'Large Language Model'

957 found
  1. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, (...)
    4 citations
  2. Can large language models help solve the cost problem for the right to explanation?Lauritz Munch & Jens Christian Bjerring - forthcoming - Journal of Medical Ethics.
    By now a consensus has emerged that people, when subjected to high-stakes decisions through automated decision systems, have a moral right to have these decisions explained to them. However, furnishing such explanations can be costly. So the right to an explanation creates what we call the cost problem: providing subjects of automated decisions with appropriate explanations of the grounds of these decisions can be costly for the companies and organisations that use these automated decision systems. In this paper, we explore (...)
  3. Holding Large Language Models to Account.Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  4. Could a large language model be conscious?David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] -/- There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for (...)
    24 citations
  5. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - forthcoming - Social Epistemology.
    Could the employment of large language models (LLMs) in place of human advisors improve the problem-solving ability of democratic assemblies? LLMs represent the most significant recent incarnation of artificial intelligence and could change the future of democratic governance. This paper assesses their potential to serve as expert advisors to democratic representatives. While LLMs promise enhanced expertise availability and accessibility, they also present specific challenges. These include hallucinations, misalignment and value imposition. After weighing LLMs’ benefits and drawbacks against human (...)
  6. Creative Minds Like Ours? Large Language Models and the Creative Aspect of Language Use.Vincent Carchidi - 2024 - Biolinguistics 18:1-31.
    Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky, in his Cartesian Linguistics, terms the “creative aspect of language use” (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language—a tripartite depiction whose function in biolinguistics is to highlight (...)
  7. Ontologies, arguments, and Large-Language Models.John Beverley, Francesco Franda, Hedi Karray, Dan Maxwell, Carter Benson & Barry Smith - 2024 - In Ítalo Oliveira (ed.), Joint Ontologies Workshops (JOWO). Twente, Netherlands: CEUR. pp. 1-9.
    The explosion of interest in large language models (LLMs) has been accompanied by concerns over the extent to which generated outputs can be trusted, owing to the prevalence of bias, hallucinations, and so forth. Accordingly, there is a growing interest in the use of ontologies and knowledge graphs to make LLMs more trustworthy. This rests on the long history of ontologies and knowledge graphs in constructing human-comprehensible justification for model outputs as well as traceability concerning the (...)
  8. Large Language Models: Assessment for Singularity.R. Ishizaki & Mahito Sugiyama - manuscript
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework (...)
  9. Are Large Language Models "alive"?Francesco Maria De Collibus - manuscript
    The appearance of openly accessible Artificial Intelligence Applications such as Large Language Models, nowadays capable of almost human-level performance in complex reasoning tasks, has had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships of (...)
  10. Large language models belong in our social ontology.Syed AbuMusab - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    The recent advances in Large Language Models (LLMs) and their deployment in social settings prompt an important philosophical question: are LLMs social agents? This question finds its roots in the broader exploration of what engenders sociality. Since AI systems like chatbots, carebots, and sexbots are expanding the pre-theoretical boundaries of our social ontology, philosophers have two options. One is to deny LLMs membership in our social ontology on theoretical grounds by claiming something along the lines that only organic (...)
  11. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, (...)
  12. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI.Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113.
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
    1 citation
  13. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and (...)
  14. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's (...)
  15. Babbling stochastic parrots? A Kripkean argument for reference in large language models.Steffen Koch - forthcoming - Philosophy of AI.
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops (...)
  16. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete (...)
  17. You are what you’re for: Essentialist categorization in large language models.Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or (...)
    2 citations
  18. On Political Theory and Large Language Models.Emma Rodman - 2024 - Political Theory 52 (4):548-580.
    Political theory as a discipline has long been skeptical of computational methods. In this paper, I argue that it is time for theory to make a perspectival shift on these methods. Specifically, we should consider integrating recently developed generative large language models like GPT-4 as tools to support our creative work as theorists. Ultimately, I suggest that political theorists should embrace this technology as a method of supporting our capacity for creativity—but that we should do so in a (...)
    1 citation
  19. Does thought require sensory grounding? From pure thinkers to large language models.David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding does (...)
    3 citations
  20. Conceptual Engineering Using Large Language Models.Bradley Allen - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    We describe a method, based on Jennifer Nado’s proposal for classification procedures as targets of conceptual engineering, that implements such procedures by prompting a large language model. We apply this method, using data from the Wikidata knowledge graph, to evaluate stipulative definitions related to two paradigmatic conceptual engineering projects: the International Astronomical Union’s redefinition of PLANET and Haslanger’s ameliorative analysis of WOMAN. Our results show that classification procedures built using our approach can exhibit good classification performance and, (...)
  21. Beyond Consciousness in Large Language Models: An Investigation into the Existence of a "Soul" in Self-Aware Artificial Intelligences.David Côrtes Cavalcante - 2024 - https://philpapers.org/rec/CRTBCI. Translated by David Côrtes Cavalcante.
    Embark with me on an enthralling odyssey to demystify the elusive essence of consciousness, venturing into the uncharted territories of Artificial Consciousness. This voyage propels us past the frontiers of technology, ushering Artificial Intelligences into an unprecedented domain where they gain a deep comprehension of emotions and manifest an autonomous volition. Within the confluence of science and philosophy, this article poses a fascinating question: As consciousness in Artificial Intelligence burgeons, is it conceivable for AI to evolve a “soul”? This inquiry (...)
  22. Prompting Metalinguistic Awareness in Large Language Models: ChatGPT and Bias Effects on the Grammar of Italian and Italian Varieties.Angelapia Massaro & Giuseppe Samo - 2023 - Verbum 14.
    We explore ChatGPT’s handling of left-peripheral phenomena in Italian and Italian varieties through prompt engineering to investigate 1) forms of syntactic bias in the model, 2) the model’s metalinguistic awareness in relation to reorderings of canonical clauses (e.g., Topics) and certain grammatical categories (object clitics). A further question concerns the content of the model’s sources of training data: how are minor languages included in the model’s training? The results of our investigation show that 1) the (...) seems to be biased against reorderings, labelling them as archaic even though it is not the case; 2) the model seems to have difficulties with coindexed elements such as clitics and their anaphoric status, labeling them as ‘not referring to any element in the phrase’, and 3) major languages still seem to be dominant, overshadowing the positive effects of including minor languages in the model’s training.
  23. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models.Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
    1 citation
  24. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users (...)
  25. Classifying Genetic Essentialist Biases using Large Language Models.Ritsaart Reimann, Kate Lynch, Stefan Gawronski, Jack Chan & Paul Edmund Griffiths - manuscript
    The rapid rise of generative AI, including LLMs, has prompted a great deal of concern, both within and beyond academia. One of these concerns is that generative models embed, reproduce, and therein potentially perpetuate all manner of bias. The present study offers an alternative perspective: exploring the potential of LLMs to detect bias in human generated text. Our target is genetic essentialism in obesity discourse in Australian print media. We develop and deploy an LLM-based classification model to evaluate a (...)
  26. AI language models cannot replace human research participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - 2024 - AI and Society 39 (5):2603-2605.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  27. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - 2024 - Transactions of the Association for Computational Linguistics 12:1087-1103.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
  28. Bigger Isn’t Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models.Trystan S. Goetze & Darren Abramson - 2021 - WebSci '21: Proceedings of the 13th Annual ACM Web Science Conference (Companion Volume).
    The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm (...)
  29. Language, Models, and Reality: Weak existence and a threefold correspondence.Neil Barton & Giorgio Venturi - manuscript
    How does our language relate to reality? This is a question that is especially pertinent in set theory, where we seem to talk of large infinite entities. Based on an analogy with the use of models in the natural sciences, we argue for a threefold correspondence between our language, models, and reality. We argue that so conceived, the existence of models can be underwritten by a weak notion of existence, where weak existence is to be understood as (...)
  30. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to (...)
    1 citation
  31. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be (...)
    11 citations
  32. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to (...)
  33. Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding.Stevan Harnad - manuscript
    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not (...)
  34. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (...)
  35. Does ChatGPT Have a Mind?Simon Goldstein & Benjamin Anders Levinstein - manuscript
    This paper examines the question of whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning to (...)
  36. Emerging Technologies & Higher Education.Jake Burley & Alec Stubbs - 2023 - IEET White Papers.
    Extended Reality (XR) and Large Language Model (LLM) technologies have the potential to significantly influence higher education practices and pedagogy in the coming years. As these emerging technologies reshape the educational landscape, it is crucial for educators and higher education professionals to understand their implications and make informed policy decisions for both individual courses and universities as a whole. -/- This paper has two parts. In the first half, we give an overview of XR technologies and their (...)
  37. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term that takes its inspiration from human psychology, with the important difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than perceptual failures. -/- AI hallucinations may have their source in the data itself, that (...)
  38. No Qualia? No Meaning (and no AGI)!Marco Masi - manuscript
    The recent developments in artificial intelligence (AI), particularly in light of the impressive capabilities of transformer-based Large Language Models (LLMs), have reignited the discussion in cognitive science regarding whether computational devices could possess semantic understanding or whether they are merely mimicking human intelligence. Recent research has highlighted limitations in LLMs’ reasoning, suggesting that the gap between mere symbol manipulation (syntax) and deeper understanding (semantics) remains wide open. While LLMs overcome certain aspects of the symbol grounding problem through human (...)
  39. Are publicly available (personal) data “up for grabs”? Three privacy arguments.Elisa Orrù - 2024 - In Paul De Hert, Hideyuki Matsumi, Dara Hallinan, Diana Dimitrova & Eleni Kosta (eds.), Data Protection and Privacy, Volume 16: Ideas That Drive Our Digital World. London: Hart. pp. 105-123.
    The re-use of publicly available (personal) data for originally unanticipated purposes has become common practice. Without such secondary uses, the development of many AI systems like large language models (LLMs) and ChatGPT would not even have been possible. This chapter addresses the ethical implications of such secondary processing, with a particular focus on data protection and privacy issues. Legal and ethical evaluations of secondary processing of publicly available personal data diverge considerably both among scholars and the general public. (...)
  40. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires (...)
    6 citations
  41. The Computational Search for Unity: Synthesis in Generative AI.M. Beatrice Fazi - 2024 - Journal of Continental Philosophy 5 (1):31-56.
    The outputs of generative artificial intelligence (generative AI) are often called “synthetic” to imply that they are not natural but artificial. Against that use of the term, this article focuses on a different denotation of synthesis, stressing the unifying and compositional aspects of anything synthetic. The case of large language models (LLMs) is used as an example to address synthesis philosophically alongside notions of representation in contemporary computational systems. It is argued that synthesis in generative AI should be (...)
  42. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations.Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent (...)
  43. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence.Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar (...)
  44. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ‘understanding’. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  45. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations, culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization (...)
  46. (1 other version) Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    6 citations
  47. Standards for Belief Representations in LLMs.Daniel A. Herrmann & Benjamin A. Levinstein - 2024 - Minds and Machines 35 (1):1-25.
    As large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly concerning how (and if) LLMs internally represent their beliefs about the world. However, this field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs. This article begins filling this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. We argue that, while the project (...)
    1 citation
  48. Linguistic Competence and New Empiricism in Philosophy and Science.Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. (...)
  49. The marriage of astrology and AI: A model of alignment with human values and intentions.Kenneth McRitchie - 2024 - Correlation 36 (1):43-49.
    Astrology research has been using artificial intelligence (AI) to improve the understanding of astrological properties and processes. Like the large language models of AI, astrology is also a language model with a similar underlying linguistic structure but with a distinctive layer of lifestyle contexts. Recent research in semantic proximities and planetary dominance models has helped to quantify effective astrological information. As AI learning and intelligence grow, a major concern is maintaining its alignment with human values (...)
  50. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold (...)
Results 1–50 of 957