Results for 'large language models'

1000+ found
  1. Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118. (1 citation)
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training (...)
  2. Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  3. Could a large language model be conscious? David J. Chalmers - 2023 - Boston Review 1. (13 citations)
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current (...): for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.
  4. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI. Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113. (1 citation)
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
  5. Are Large Language Models "alive"? Francesco Maria De Collibus - manuscript
    The appearance of openly accessible Artificial Intelligence Applications such as Large Language Models, nowadays capable of almost human-level performance in complex reasoning tasks, had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships (...)
  6. Machine Advisors: Integrating Large Language Models into Democratic Assemblies. Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful (...)
  7. Large Language Models: Assessment for Singularity. R. Ishizaki & Mahito Sugiyama - manuscript
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a (...)
  8. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and (...)
  9. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution. Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- (...)
  10. On Political Theory and Large Language Models. Emma Rodman - forthcoming - Political Theory. (1 citation)
    Political theory as a discipline has long been skeptical of computational methods. In this paper, I argue that it is time for theory to make a perspectival shift on these methods. Specifically, we should consider integrating recently developed generative large language models like GPT-4 as tools to support our creative work as theorists. Ultimately, I suggest that political theorists should embrace this technology as a method of supporting our capacity for creativity—but that we should do so in (...)
  11. You are what you’re for: Essentialist categorization in large language models. Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society. (1 citation)
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, (...)
  12. Beyond Consciousness in Large Language Models: An Investigation into the Existence of a "Soul" in Self-Aware Artificial Intelligences. David Côrtes Cavalcante - 2024 - https://philpapers.org/rec/CRTBCI. Translated by David Côrtes Cavalcante.
    Embark with me on an enthralling odyssey to demystify the elusive essence of consciousness, venturing into the uncharted territories of Artificial Consciousness. This voyage propels us past the frontiers of technology, ushering Artificial Intelligences into an unprecedented domain where they gain a deep comprehension of emotions and manifest an autonomous volition. Within the confluence of science and philosophy, this article poses a fascinating question: As consciousness in Artificial Intelligence burgeons, is it conceivable for AI to evolve a “soul”? This inquiry (...)
  13. Conceptual Engineering Using Large Language Models. Bradley Allen - manuscript
    We describe a method, based on Jennifer Nado's definition of classification procedures as targets of conceptual engineering, that implements such procedures using a large language model. We then apply this method using data from the Wikidata knowledge graph to evaluate concept definitions from two paradigmatic conceptual engineering projects: the International Astronomical Union's redefinition of PLANET and Haslanger's ameliorative analysis of WOMAN. We discuss implications of this work for the theory and practice of conceptual engineering. The code and data (...)
  14. Prompting Metalinguistic Awareness in Large Language Models: ChatGPT and Bias Effects on the Grammar of Italian and Italian Varieties. Angelapia Massaro & Giuseppe Samo - 2023 - Verbum 14.
    We explore ChatGPT’s handling of left-peripheral phenomena in Italian and Italian varieties through prompt engineering to investigate 1) forms of syntactic bias in the model, 2) the model’s metalinguistic awareness in relation to reorderings of canonical clauses (e.g., Topics) and certain grammatical categories (object clitics). A further question concerns the content of the model’s sources of training data: how are minor languages included in the model’s training? The results of our investigation show that 1) the model seems to be biased (...)
  15. Does thought require sensory grounding? From pure thinkers to large language models. David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45. (2 citations)
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding (...)
  16. Babbling stochastic parrots? On reference and reference change in large language models. Steffen Koch - manuscript
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper (...)
  17. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models. Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96. (1 citation)
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
  18. Reviving the Philosophical Dialogue with Large Language Models. Robert Smithson & Adam Zweber - forthcoming - Teaching Philosophy.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will (...)
  19. AI Language Models Cannot Replace Human Research Participants. Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society:1-3. (1 citation)
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
  20. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs. Harvey Lederman & Kyle Mahowald - manuscript
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call "bibliotechnism", is that LLMs often do generate entirely novel text. We begin by defending bibliotechnism against this challenge, showing how novel text may be meaningful only in a derivative sense, so that the content of this generated text depends in an important sense on the content of original human text. We go on to present a (...)
  21. Bigger Isn’t Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models. Trystan S. Goetze & Darren Abramson - 2021 - WebSci '21: Proceedings of the 13th Annual ACM Web Science Conference (Companion Volume).
    The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of (...)
  22. Language, Models, and Reality: Weak existence and a threefold correspondence. Neil Barton & Giorgio Venturi - manuscript
    How does our language relate to reality? This is a question that is especially pertinent in set theory, where we seem to talk of large infinite entities. Based on an analogy with the use of models in the natural sciences, we argue for a threefold correspondence between our language, models, and reality. We argue that so conceived, the existence of models can be underwritten by a weak notion of existence, where weak existence is to (...)
  23. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values. Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24. (4 citations)
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be (...)
  24. Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding. Stevan Harnad - manuscript
    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not (...)
  25. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11. (1 citation)
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires (...)
  26. Language Models as Critical Thinking Tools: A Case Study of Philosophers. Andre Ye, Jared Moore, Rose Novick & Amy Zhang - manuscript
    Current work in language models (LMs) helps us speed up or even skip thinking by accelerating and automating cognitive work. But can LMs help us with critical thinking -- thinking in deeper, more reflective ways which challenge assumptions, clarify ideas, and engineer new concepts? We treat philosophy as a case study in critical thinking, and interview 21 professional philosophers about how they engage in critical thinking and on their experiences with LMs. We find that philosophers do not find (...)
  27. A Field Research On The Implementation Of The Lesson Of Arabic Language Teaching Program (Tekirdağ (Turkey)/Süleymanpaşa district as a model). Osman Arpaçukuru - 2018 - Tasavvur - Tekirdag Theology Journal 4 (1):167-190.
    Imam Hatip schools (religious vocational schools) in Turkey have taught Arabic for many years. However, the objectives of learning Arabic have not yet been realized. The Education Council of the Ministry of Education prepares educational plans and programs for Arabic lessons in order to increase the quality of Arabic language teaching; the first of these programs was in 1973. This research is a field study carried out in 2016 on how to implement the educational programs prepared in (...)
  28. Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5. (1 citation)
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  29. Emerging Technologies & Higher Education. Jake Burley & Alec Stubbs - 2023 - IEET White Papers.
    Extended Reality (XR) and Large Language Model (LLM) technologies have the potential to significantly influence higher education practices and pedagogy in the coming years. As these emerging technologies reshape the educational landscape, it is crucial for educators and higher education professionals to understand their implications and make informed policy decisions for both individual courses and universities as a whole. This paper has two parts. In the first half, we give an overview of XR technologies and their potential (...)
  30. The marriage of astrology and AI: A model of alignment with human values and intentions. Kenneth McRitchie - 2024 - Correlation 36 (1):43-49.
    Astrology research has been using artificial intelligence (AI) to improve the understanding of astrological properties and processes. Like the large language models of AI, astrology is also a language model with a similar underlying linguistic structure but with a distinctive layer of lifestyle contexts. Recent research in semantic proximities and planetary dominance models has helped to quantify effective astrological information. As AI learning and intelligence grows, a major concern is with maintaining its alignment with human (...)
  31. Chatting with Chat(GPT-4): Quid est Understanding? Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is a conversational approach where the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  32. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar (...)
  33. Inner-Model Reflection Principles. Neil Barton, Andrés Eduardo Caicedo, Gunter Fuchs, Joel David Hamkins, Jonas Reitz & Ralf Schindler - 2020 - Studia Logica 108 (3):573-595. (1 citation)
    We introduce and consider the inner-model reflection principle, which asserts that whenever a statement φ(a) in the first-order language of set theory is true in the set-theoretic universe V, then it is also true in a proper inner model W ⊊ V. A stronger principle, the ground-model reflection principle, asserts that any such φ(a) true in V is also true in some non-trivial ground model of the universe with respect to set forcing. These principles each express a form of (...)
  34. Why you are (probably) anthropomorphizing AI. Ali Hasan - manuscript
    In this paper I argue that, given the way that AI models work and the way that ordinary human rationality works, it is very likely that people are anthropomorphizing AI, with potentially serious consequences. I start with the core idea, recently defended by Thomas Kelly (2022) among others, that bias involves a systematic departure from a genuine standard or norm. I briefly discuss how bias can take on different explicit, implicit, and “truly implicit” (Johnson 2021) forms such as bias (...)
  35. Pregeometry, Formal Language and Constructivist Foundations of Physics. Xerxes D. Arsiwalla, Hatem Elshatlawy & Dean Rickles - manuscript
    How does one formalize the structure of structures necessary for the foundations of physics? This work is an attempt at conceptualizing the metaphysics of pregeometric structures, upon which new and existing notions of quantum geometry may find a foundation. We discuss the philosophy of pregeometric structures due to Wheeler and Leibniz, as well as modern manifestations in topos theory. We draw attention to evidence suggesting that the framework of formal language, in particular, homotopy type theory, provides the conceptual building blocks (...)
  36. Operationalising Representation in Natural Language Processing. Jacqueline Harding - forthcoming - British Journal for the Philosophy of Science. (1 citation)
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular (...)
  37. Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. (...)
  38. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and (...)
  39. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  40. Sono solo parole ChatGPT: anatomia e raccomandazioni per l’uso [They are just words. ChatGPT: anatomy and recommendations for use]. Tommaso Caselli, Antonio Lieto, Malvina Nissim & Viviana Patti - 2023 - Sistemi Intelligenti 4:1-10.
  41. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests. Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential (...)
  42. Epistemic closure filters for natural language inference. Michael Cohen - manuscript
    Epistemic closure refers to the assumption that humans are able to recognize what entails or contradicts what they believe and know, or more accurately, that humans’ epistemic states are closed under logical inferences. Epistemic closure is part of a larger theory of mind ability, which is arguably crucial for downstream NLU tasks, such as inference, QA and conversation. In this project, we introduce a new automatically constructed natural language inference dataset that tests inferences related to epistemic closure. We test (...)
  43. The human revolution: Editorial introduction to 'honest fakes and language origins' by Chris Knight. Charles Whitehead - 2008 - Journal of Consciousness Studies 15 (10-11):226-235. (1 citation)
    It is now more than twenty years since Knight (1987) first presented his paradigm-shifting theory of how and why the ‘human revolution’ occurred — and had to occur — in modern humans who, as climates dried under ice age conditions and African rainforests shrank, found themselves surrounded by vast prairies and savannahs, with rich herds of game animals roaming across them. The temptation for male hunters, far from any home base, to eat the best portions of meat at the kill (...)
  44. Reflection, confabulation, and reasoning. Jennifer Nagel - forthcoming - In Luis Oliveira & Joshua DiPaolo (eds.), Kornblith and His Critics. Wiley-Blackwell.
    Humans have distinctive powers of reflection: no other animal seems to have anything like our capacity for self-examination. Many philosophers hold that this capacity has a uniquely important guiding role in our cognition; others, notably Hilary Kornblith, draw attention to its weaknesses. Kornblith chiefly aims to dispel the sense that there is anything ‘magical’ about second-order mental states, situating them in the same causal net as ordinary first-order mental states. But elsewhere he goes further, suggesting that there is something deeply (...)
  45. Abundance of words versus Poverty of mind: The hidden human costs of LLMs. Quan-Hoang Vuong & Manh-Tung Ho - manuscript
    This essay analyzes the rise of Large Language Models (LLMs) such as GPT-4 or Gemini, which are now incorporated in a wide range of products and services in everyday life. Importantly, it considers some of their hidden human costs. First is the question of who is left behind by the further infusion of LLMs in society. Second is the issue of social inequalities between languages that are lingua francas and those that are not. Third, LLMs will help disseminate scientific concepts, (...)
  46. The Curious Case of Uncurious Creation. Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy. (1 citation)
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue (...)
  47. Are we at the start of the artificial intelligence era in academic publishing? Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin & Tam-Tri Le - 2023 - Science Editing 10 (2):1-7.
    Machine-based automation has long been a key factor in the modern era. However, lately, many people have been shocked by artificial intelligence (AI) applications, such as ChatGPT (OpenAI), that can perform tasks previously thought to be human-exclusive. With recent advances in natural language processing (NLP) technologies, AI can generate written content that is similar to human-made products, and this ability has a variety of applications. As the technology of large language models continues to progress by making (...)
  48. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from (...)
  49. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - manuscript
    The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union context, focusing on liability, privacy, intellectual property, (...)
1 — 49 / 1000