Results for 'LLM'

15 results found
  1. What lies behind AGI: ethical concerns related to LLMs. Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  2. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  3. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  4. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
  5. Taking AI Risks Seriously: A New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  6. You are what you’re for: Essentialist categorization in large language models. Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or is made of. (...)
  7. Consent GPT: Is it ethical to delegate procedural consent to conversational AI? Jemima Allen, Brian D. Earp, Julian Koplin & Dominic Wilkinson - manuscript
    Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (e.g. junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of (...)
  8. Evaluation and Design of Generalist Systems (EDGeS). John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  9. Why you are (probably) anthropomorphizing AI. Ali Hasan - manuscript
    In this paper I argue that, given the way that AI models work and the way that ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. I start with the core idea, recently defended by Thomas Kelly (2022) among others, that bias involves a systematic departure from a genuine standard or norm. I briefly discuss how bias can take on different explicit, implicit, and “truly implicit” (Johnson 2021) forms such as bias by (...)
  10. Emerging Technologies & Higher Education. Jake Burley & Alec Stubbs - 2023 - IEET White Papers.
    Extended Reality (XR) and Large Language Model (LLM) technologies have the potential to significantly influence higher education practices and pedagogy in the coming years. As these emerging technologies reshape the educational landscape, it is crucial for educators and higher education professionals to understand their implications and make informed policy decisions for both individual courses and universities as a whole. This paper has two parts. In the first half, we give an overview of XR technologies and their potential future role (...)
  11. The Ghost in the Machine has an American accent: value conflict in GPT-3. Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). (...)
  12. Are Large Language Models "alive"? Francesco Maria De Collibus - manuscript
    The appearance of openly accessible Artificial Intelligence applications such as Large Language Models, nowadays capable of almost human-level performance in complex reasoning tasks, has had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced that they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships of power and (...)
  13. Facing Janus: An Explanation of the Motivations and Dangers of AI Development. Aaron Graifman - manuscript
    This paper serves as an intuition-building mechanism for understanding the basics of AI, misalignment, and the reasons why strong AI is being pursued. The approach is to engage with both pro- and anti-AI-development arguments to gain a deeper understanding of both views, and hopefully of the issue as a whole. We investigate the basics of misalignment, common misconceptions, and the arguments for why we would want to pursue strong AI anyway. The paper delves into various aspects (...)
  14. Right to Silence - UK, U.S., France, Germany. Sally Serena Ramage - 2008 - Current Criminal Law 1 (2):2-30.
    Right to Silence - UK, U.S., France, and Germany. Sally Ramage, BA (Hons), MBA, LLM, MPhil, MCIJ, MCMI, DA, ASLS, BAWP; ORCID 0000-0002-8854-4293. Current Criminal Law, Volume 1, Issue 2, pages 2-30. Publisher & Managing Editor of the Criminal Lawyer series [1980-2022] (ISSN 2049-8047), the Current Criminal Law series [2008-2022] (ISSN 1758-8405), and the Criminal Law News series [2008-2022] (ISSN 1758-8421). Sweet & Maxwell (Thomson Reuters) (Licensed Annotator of UK Statutes) in annual law books Current Law (...)
  15. Criminal offences and regulatory breaches in using social networking evidence in personal injury litigation. Sally Serena Ramage - 2010 - Current Criminal Law 2 (3):2-7.
    Criminal offences and regulatory breaches in using social networking evidence in personal injury litigation. Current Criminal Law (ISSN 1758-8405), Volume 2, Issue 3, March 2010, pages 2-7. Sally Ramage, BA (Hons), MBA, LLM, MPhil, MCIJ, MCMI, DA, ASLS, BAWP; ORCID 0000-0002-8854-4293; WIPO 900614; UK TM 2401827; USA TM 3,440,910. Publisher & Managing Editor of the Criminal Lawyer series [1980-2022] (ISSN 2049-8047), the Current Criminal Law series [2008-2022] (ISSN 1758-8405), and the Criminal Law News series [2008-2022] (ISSN 1758-8421). Sweet & Maxwell (Thomson (...)