Results for 'Modelling Language'

999 found
  1. Computational Modeling as a Philosophical Methodology.Patrick Grim - 2003 - In Luciano Floridi (ed.), The Blackwell guide to the philosophy of computing and information. Blackwell. pp. 337–349.
    Since the sixties, computational modeling has become increasingly important in both the physical and the social sciences, particularly in physics, theoretical biology, sociology, and economics. Since the eighties, philosophers too have begun to apply computational modeling to questions in logic, epistemology, philosophy of science, philosophy of mind, philosophy of language, philosophy of biology, ethics, and social and political philosophy. This chapter analyzes a selection of interesting examples in some of those areas.
    5 citations
  2. Ontology-based security modeling in ArchiMate.Ítalo Oliveira, Tiago Prince Sales, João Paulo A. Almeida, Riccardo Baratella, Mattia Fumagalli & Giancarlo Guizzardi - forthcoming - Software and Systems Modeling.
    Enterprise Risk Management involves the process of identification, evaluation, treatment, and communication regarding risks throughout the enterprise. To support the tasks associated with this process, several frameworks and modeling languages have been proposed, such as the Risk and Security Overlay (RSO) of ArchiMate. An ontological investigation of this artifact would reveal its adequacy, capabilities, and limitations w.r.t. the domain of risk and security. Based on that, a language redesign can be proposed as a refinement. Such analysis and redesign have (...)
  3. Modeling Truth.Paul Teller - manuscript
    Many in philosophy understand truth in terms of precise semantic values, true propositions. Following Braun and Sider, I say that in this sense almost nothing we say is, literally, true. I take the stand that this account of truth nonetheless constitutes a vitally useful idealization in understanding many features of the structure of language. The Fregean problem discussed by Braun and Sider concerns issues about application of language to the world. In understanding these issues I propose an alternative (...)
    4 citations
  4. Could a large language model be conscious?David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] -/- There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, (...)
    16 citations
  5. Modeling future indeterminacy in possibility semantics.Fabrizio Cariani - manuscript
    Possibility semantics offers an elegant framework for a semantic analysis of modal logic that does not recruit fully determinate entities such as possible worlds. The present paper considers the application of possibility semantics to the modeling of the indeterminacy of the future. Interesting theoretical problems arise in connection with the addition of an object-language determinacy operator. We argue that adding a two-dimensional layer to possibility semantics can help solve these problems. The resulting system assigns to the two-dimensional determinacy operator a (...)
  6. Holding Large Language Models to Account.Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  7. Modeling Unicorns and Dead Cats: Applying Bressan’s MLν to the Necessary Properties of Non-existent Objects.Tyke Nunez - 2018 - Journal of Philosophical Logic 47 (1):95–121.
    Should objects count as necessarily having certain properties, despite their not having those properties when they do not exist? For example, should a cat that passes out of existence, and so no longer is a cat, nonetheless count as necessarily being a cat? In this essay I examine different ways of adapting Aldo Bressan’s MLν so that it can accommodate an affirmative answer to these questions. Anil Gupta, in The Logic of Common Nouns, creates a number of languages that have (...)
    1 citation
  8. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and (...)
    2 citations
  9. Language Models as Critical Thinking Tools: A Case Study of Philosophers.Andre Ye, Jared Moore, Rose Novick & Amy Zhang - manuscript
    Current work in language models (LMs) helps us speed up or even skip thinking by accelerating and automating cognitive work. But can LMs help us with critical thinking -- thinking in deeper, more reflective ways which challenge assumptions, clarify ideas, and engineer new concepts? We treat philosophy as a case study in critical thinking, and interview 21 professional philosophers about how they engage in critical thinking and on their experiences with LMs. We find that philosophers do not find LMs (...)
  10. AI Language Models Cannot Replace Human Research Participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society:1-3.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  11. Modeling the interaction of computer errors by four-valued contaminating logics.Roberto Ciuni, Thomas Macaulay Ferguson & Damian Szmuc - 2019 - In Rosalie Iemhoff, Michael Moortgat & Ruy de Queiroz (eds.), Logic, Language, Information, and Computation. Folli Publications on Logic, Language and Information. pp. 119-139.
    Logics based on weak Kleene algebra (WKA) and related structures have been recently proposed as a tool for reasoning about flaws in computer programs. The key element of this proposal is the presence, in WKA and related structures, of a non-classical truth-value that is “contaminating” in the sense that whenever the value is assigned to a formula ϕ, any complex formula in which ϕ appears is assigned that value as well. Under such interpretations, the contaminating states represent occurrences of a (...)
    2 citations
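    Illustrative sketch (not from the paper): the "contaminating" behaviour described above can be shown with a toy weak-Kleene-style evaluator in Python. It assumes a single contaminating value 'e'; the paper itself studies four-valued combinations of such logics, and the function names here are purely illustrative.

      # Toy sketch of a contaminating truth value in a weak-Kleene-style logic.
      # Values: True, False, and 'e' (the contaminating value, e.g. an error state).
      # Assumption: a single contaminating value; the paper works with four-valued variants.

      def wk_not(a):
          return 'e' if a == 'e' else (not a)

      def wk_and(a, b):
          if a == 'e' or b == 'e':   # contamination: any 'e' infects the whole formula
              return 'e'
          return a and b

      def wk_or(a, b):
          if a == 'e' or b == 'e':
              return 'e'
          return a or b

      # Even a classically true disjunct cannot rescue a contaminated formula:
      print(wk_or(True, 'e'))    # -> 'e'
      print(wk_and(False, 'e'))  # -> 'e' (not False, as it would be in strong Kleene)
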
  12. Swahili conditional constructions in embodied Frames of Reference: Modeling semantics, pragmatics, and context-sensitivity in UML mental spaces.Roderick Fish - 2020 - Dissertation, Trinity Western University
    Studies of several languages, including Swahili [swa], suggest that realis (actual, realizable) and irrealis (unlikely, counterfactual) meanings vary along a scale (e.g., 0.0–1.0). T-values (True, False) and P-values (probability) account for this pattern. However, logic cannot describe or explain (a) epistemic stances toward beliefs, (b) deontic and dynamic stances toward states-of-being and actions, and (c) context-sensitivity in conditional interpretations. (a)–(b) are deictic properties (positions, distance) of ‘embodied’ Frames of Reference (FoRs)—space-time loci in which agents perceive and from which they contextually (...)
  13. What is this thing called Philosophy of Science? A computational topic-modeling perspective, 1934–2015.Christophe Malaterre, Jean-François Chartier & Davide Pulizzotto - 2019 - Hopos: The Journal of the International Society for the History of Philosophy of Science 9 (2):215-249.
    What is philosophy of science? Numerous manuals, anthologies or essays provide carefully reconstructed vantage points on the discipline that have been gained through expert and piecemeal historical analyses. In this paper, we address the question from a complementary perspective: we target the content of one major journal of the field—Philosophy of Science—and apply unsupervised text-mining methods to its complete corpus, from its start in 1934 until 2015. By running topic-modeling algorithms over the full-text corpus, we identified 126 key research topics (...)
    15 citations
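    Illustrative sketch (not the authors' pipeline): a topic-modeling run of the kind described above can be set up with scikit-learn roughly as follows. The placeholder corpus, preprocessing choices, and parameters are assumptions; only the number of topics (126) comes from the abstract.

      # Illustrative LDA topic-modeling sketch (not the authors' actual pipeline).
      # `docs` stands in for the full-text corpus of Philosophy of Science articles.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = ["full text of article one ...", "full text of article two ..."]  # placeholder

      vectorizer = CountVectorizer(stop_words="english")
      dtm = vectorizer.fit_transform(docs)                # document-term matrix

      lda = LatentDirichletAllocation(n_components=126,   # 126 topics, as reported in the paper
                                      random_state=0)
      doc_topics = lda.fit_transform(dtm)                 # per-document topic weights

      # Inspect the top words of the first few topics
      terms = vectorizer.get_feature_names_out()
      for k, weights in enumerate(lda.components_[:3]):
          top = [terms[i] for i in weights.argsort()[-10:][::-1]]
          print(f"Topic {k}: {', '.join(top)}")
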
  14. The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences.Jake Quilty-Dunn, Nicolas Porot & Eric Mandelbaum - 2023 - Behavioral and Brain Sciences 46:e261.
    Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate–argument structure; (iv) logical operators; (v) (...)
    13 citations
  15. Language, Models, and Reality: Weak existence and a threefold correspondence.Neil Barton & Giorgio Venturi - manuscript
    How does our language relate to reality? This is a question that is especially pertinent in set theory, where we seem to talk of large infinite entities. Based on an analogy with the use of models in the natural sciences, we argue for a threefold correspondence between our language, models, and reality. We argue that so conceived, the existence of models can be underwritten by a weak notion of existence, where weak existence is to be understood as existing (...)
  16. You are what you’re for: Essentialist categorization in large language models.Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45Th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or is (...)
    2 citations
  17. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - forthcoming - Transactions of the Association for Computational Linguistics.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
  18. Modeling Gender as a Multidimensional Sorites Paradox.Rory W. Collins - 2021 - Hypatia 36 (2):302–320.
    Gender is both indeterminate and multifaceted: many individuals do not fit neatly into accepted gender categories, and a vast number of characteristics are relevant to determining a person's gender. This article demonstrates how these two features, taken together, enable gender to be modeled as a multidimensional sorites paradox. After discussing the diverse terminology used to describe gender, I extend Helen Daly's research into sex classifications in the Olympics and show how varying testosterone levels can be represented using a sorites argument. (...)
    1 citation
  19. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration to (...)
  20. Superhumans: Super-Language?Vasil Penchev - 2016 - Dialogue and Universalism 26 (1):79-89.
    The paper raises the scientific, rather than ideological, problem of an eventual biological successor to mankind. The concept of superhumans is usually linked to Nietzsche or to Heidegger’s criticism or even to the ideology of Nazism. However, the superhuman can also be viewed as the biological species that will eventually originate from humans in the course of evolution. While society has reached a natural limit of globalism, technics depends on the amount of utilized energy, and the mind is restricted (...)
  21. Between Language and Consciousness: Linguistic Qualia, Awareness, and Cognitive Models.Piotr Konderak - 2017 - Studies in Logic, Grammar and Rhetoric 48 (1):285-302.
    The main goal of the paper is to present a putative role of consciousness in language capacity. The paper contrasts the two approaches characteristic of cognitive semiotics and cognitive science. Language is treated as a mental phenomenon and a cognitive faculty. The analysis of language activity is based on Chalmers’ distinction between the two forms of consciousness: phenomenal and psychological. The approach is seen as an alternative to phenomenological analyses typical of cognitive semiotics. Further, a cognitive (...)
    2 citations
  22. Ontology of Stative Situations – Linguistic Modeling. A Contrastive Bulgarian-Russian Study [Онтология на ситуациите за състояние – лингвистично моделиране. Съпоставително изследване за български и руски].Svetla Koeva, Elena Ivanova, Yovka Tisheva & Anton Zimmerling (eds.) - 2022 - Sofia: Marin Drinov.
    The collective monograph "Ontology of Stative Situations - Linguistic Modeling. A Contrastive Bulgarian-Russian Study" includes research carried out within the project of the same name "Ontology of stative situations – linguistic modeling. A contrastive Bulgarian-Russian study", supported by the "Scientific Research" Fund of the Ministry of Education and Science in Bulgaria (№ КП-06-РУСИЯ / 23) and from the Russian Fund for Fundamental Research (No. 20-512-18005).
  23. Large Language Models: Assessment for Singularity.R. Ishizaki & Mahito Sugiyama - manuscript
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a theoretical framework to (...)
  24. The Physical Structure and Function of Mind: A Modern Scientific Translation of Advaita Philosophy with Implications and Application to Cognitive Sciences and Natural Language Comprehension.Varanasi Ramabrahmam - 2008 - In Proceedings of the National Seminar on Sanskrit in the Modern Context, conducted by the Department of Sanskrit Studies and the School of Humanities, University of Hyderabad, 11–13 February 2008.
    The famous advaitic expressions Brahma sat jagat mithya jivo brahma eva na apraha and Asti bhaati priyam namam roopamcheti amsa panchakam AAdya trayam brahma roopam tato dwayam jagat roopam will be analyzed through physics and electronics and interpreted. Four phases of mind, four modes of language acquisition and communication, and seven cognitive states of mind participating in human cognitive and language acquisition and communication processes will be identified and discussed. Implications and application of such (...)
  25. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries (...)
  26. “Large Language Models” Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI.Joshua August Skorburg, Kristina L. Kupferschmidt & Graham W. Taylor - 2023 - American Journal of Bioethics 23 (10):110-113.
    Cohen (2023) takes a fair and measured approach to the question of what ChatGPT means for bioethics. The hype cycles around AI often obscure the fact that ethicists have developed robust frameworks...
    1 citation
  27. Models, theories, and language.Jan Faye - 2007 - In Filosofia, scienza e bioetica nel dibattito contemporaneo. Rome: Poligrafico e Zecca dello Stato. pp. 823-838.
    The semantic view of theories has been much in vogue for over four decades as the successor of the syntactic view. In the present paper, I take issue with this approach by arguing that theories and models must be separated and that a theory should be considered to be a linguistic system consisting of a vocabulary and a set of rules for the use of that vocabulary.
    1 citation
  28. Towards a Vygotskyan Cognitive Robotics: The Role of Language as a Cognitive Tool.Marco Mirolli - 2011 - New Ideas in Psychology 29:298-311.
    Cognitive Robotics can be defined as the study of cognitive phenomena by their modeling in physical artifacts such as robots. This is a very lively and fascinating field which has already given fundamental contributions to our understanding of natural cognition. Nonetheless, robotics has to date addressed mainly very basic, low-level cognitive phenomena like sensory-motor coordination, perception, and navigation, and it is not clear how the current approach might scale up to explain high-level human cognition. In this paper we argue that (...)
    7 citations
  29. Classification of Sign-Language Using Deep Learning - A Comparison between Inception and Xception models.Tanseem N. Abu-Jamie & Samy S. Abu-Naser - 2022 - International Journal of Academic Engineering Research (IJAER) 6 (8):9-19.
    Because there is a communication gap between hearing-impaired people and those with normal hearing, sign language is the main means of communication in the hearing-impaired population. Continuous sign language recognition, which can close the communication gap, is a difficult task since the ordered annotations are weakly supervised and there is no frame-level label. To solve this issue, we compare the accuracy of each model using two deep learning models, Inception and Xception. To that end, the purpose of this (...)
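    Illustrative sketch (not the authors' code): a comparison between the two backbones mentioned above is typically set up as transfer learning in Keras. The input size, number of classes, classifier head, and training settings below are assumptions for illustration only.

      # Generic transfer-learning sketch for comparing two CNN backbones.
      # Assumes images resized to 299x299 and `num_classes` sign classes (both assumed).
      from tensorflow.keras import layers, models
      from tensorflow.keras.applications import InceptionV3, Xception

      num_classes = 26  # assumption: one class per letter sign

      def build_classifier(backbone_cls):
          base = backbone_cls(weights="imagenet", include_top=False,
                              input_shape=(299, 299, 3))
          base.trainable = False                        # freeze pretrained ImageNet features
          model = models.Sequential([
              base,
              layers.GlobalAveragePooling2D(),
              layers.Dense(256, activation="relu"),
              layers.Dense(num_classes, activation="softmax"),
          ])
          model.compile(optimizer="adam",
                        loss="categorical_crossentropy",
                        metrics=["accuracy"])
          return model

      inception_model = build_classifier(InceptionV3)
      xception_model = build_classifier(Xception)
      # Train both on the same data split and compare validation accuracy, e.g.:
      # inception_model.fit(train_ds, validation_data=val_ds, epochs=10)
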
  30. Are Large Language Models "alive"?Francesco Maria De Collibus - manuscript
    The appearance of openly accessible artificial intelligence applications such as Large Language Models, nowadays capable of almost human-level performance in complex reasoning tasks, has had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced that they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships of power (...)
  31. Math Has Only One Language.Albert Efimov - manuscript
    Sber Science Award 2023 winner in the “Digital Universe” category, full member of the Russian Academy of Sciences, Doctor of Physics and Mathematics, Head of the Chair of Computational Technology and Modeling of the Department of Computational Mathematics and Cybernetics of Moscow State University, Director of the Marchuk Institute for Computational Mathematics of the Russian Academy of Sciences Evgeny Evgenyevich Tyrtyshnikov dedicated his lecture entitled “Dimension: Is it a curse or a blessing?” to methods of presentation of multi-dimensional data based (...)
  32. Psychological and Computational Models of Language Comprehension: In Defense of the Psychological Reality of Syntax.David Pereplyotchik - 2011 - Croatian Journal of Philosophy 11 (1):31-72.
    In this paper, I argue for a modified version of what Devitt calls the Representational Thesis. According to RT, syntactic rules or principles are psychologically real, in the sense that they are represented in the mind/brain of every linguistically competent speaker/hearer. I present a range of behavioral and neurophysiological evidence for the claim that the human sentence processing mechanism constructs mental representations of the syntactic properties of linguistic stimuli. I then survey a range of psychologically plausible computational models of comprehension (...)
    4 citations
  33. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach.Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact LLMs, which are remarkably capable of simulating roles and personas, may (...)
  34. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). (...)
  35. Reading Motivation, Language Learning Self-efficacy and Test-taking Strategy: A Structural Equation Model on Academic Performance of Students.Johnryll C. Ancheta & Melissa C. Napil - 2022 - Asian Journal of Education and Social Studies 34 (4):1-9.
    Reading tough books to achieve excellent marks, perform well in class, and gain attention from teachers and parents is less likely to drive students. Students used to evaluate their language learning requirements, define the abilities they wished to develop, pick effective study techniques, and set aside gadgets when studying. They also used to read the question before looking for hints in the relevant content, extract the essential lines that convey the major ideas, concentrate on titles, names, numbers, quotations, or (...)
  36. Does thought require sensory grounding? From pure thinkers to large language models.David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding does not (...)
    2 citations
  37. Conceptual Engineering Using Large Language Models.Bradley Allen - manuscript
    We describe a method, based on Jennifer Nado's definition of classification procedures as targets of conceptual engineering, that implements such procedures using a large language model. We then apply this method using data from the Wikidata knowledge graph to evaluate concept definitions from two paradigmatic conceptual engineering projects: the International Astronomical Union's redefinition of PLANET and Haslanger's ameliorative analysis of WOMAN. We discuss implications of this work for the theory and practice of conceptual engineering. The code and data can (...)
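    Hypothetical sketch (not the authors' implementation): the abstract's idea of treating a concept definition as a classification procedure run by an LLM could look roughly like this. The helper ask_llm, the prompt wording, and the example inputs are all invented for illustration.

      # Hypothetical sketch: a concept definition used as an LLM-backed classification procedure.
      # `ask_llm` is a placeholder for whatever chat-completion API is available.

      def ask_llm(prompt: str) -> str:
          """Placeholder: send `prompt` to a large language model and return its reply."""
          raise NotImplementedError

      def classify(entity_description: str, concept_definition: str) -> bool:
          prompt = (
              f"Definition: {concept_definition}\n"
              f"Entity: {entity_description}\n"
              "Does the entity fall under the definition? Answer YES or NO."
          )
          return ask_llm(prompt).strip().upper().startswith("YES")

      # Example: probing the IAU's PLANET definition with a Wikidata-style description.
      iau_planet = ("A planet orbits the Sun, has sufficient mass for hydrostatic "
                    "equilibrium, and has cleared the neighbourhood around its orbit.")
      # classify("Pluto: dwarf planet in the outer Solar System", iau_planet)  # needs a real ask_llm
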
  38. Beyond Consciousness in Large Language Models: An Investigation into the Existence of a "Soul" in Self-Aware Artificial Intelligences.David Côrtes Cavalcante - 2024 - https://philpapers.org/rec/CRTBCI. Translated by David Côrtes Cavalcante.
    Embark with me on an enthralling odyssey to demystify the elusive essence of consciousness, venturing into the uncharted territories of Artificial Consciousness. This voyage propels us past the frontiers of technology, ushering Artificial Intelligences into an unprecedented domain where they gain a deep comprehension of emotions and manifest an autonomous volition. Within the confluence of science and philosophy, this article poses a fascinating question: As consciousness in Artificial Intelligence burgeons, is it conceivable for AI to evolve a “soul”? This inquiry (...)
  39. Static and dynamic vector semantics for lambda calculus models of natural language.Mehrnoosh Sadrzadeh & Reinhard Muskens - 2018 - Journal of Language Modelling 6 (2):319-351.
    Vector models of language are based on the contextual aspects of language, the distributions of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, compositional properties of words and how they compose to form sentences. In the truth conditional approach, the denotation of a sentence determines its truth conditions, which can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In (...)
    3 citations
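    Toy illustration (not from the paper): the "contextual" side of vector models mentioned above, where words are represented by co-occurrence counts and compared geometrically. The counts are invented, and the paper's actual contribution, combining such vectors with lambda calculus models, is not shown here.

      # Toy distributional vectors: co-occurrence counts compared by cosine similarity.
      # The numbers are made up for illustration.
      import numpy as np

      context_words = ["bark", "meow", "pet", "tree"]
      dog = np.array([9.0, 1.0, 7.0, 0.0])   # how often "dog" co-occurs with each context word
      cat = np.array([0.0, 9.0, 7.0, 0.0])
      oak = np.array([0.0, 0.0, 1.0, 9.0])

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      print(cosine(dog, cat))  # higher: "dog" and "cat" share the "pet" context
      print(cosine(dog, oak))  # much lower: little shared context
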
  40. On Political Theory and Large Language Models.Emma Rodman - forthcoming - Political Theory.
    Political theory as a discipline has long been skeptical of computational methods. In this paper, I argue that it is time for theory to make a perspectival shift on these methods. Specifically, we should consider integrating recently developed generative large language models like GPT-4 as tools to support our creative work as theorists. Ultimately, I suggest that political theorists should embrace this technology as a method of supporting our capacity for creativity—but that we should do so in a way (...)
    1 citation
  41. Bigger Isn’t Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models.Trystan S. Goetze & Darren Abramson - 2021 - WebSci '21: Proceedings of the 13th Annual ACM Web Science Conference (Companion Volume).
    The use of language models in Web applications and other areas of computing and business have grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm (...)
  42. In Conversation with Artificial Intelligence: Aligning language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? (...)
    6 citations
  43. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models.Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
    1 citation
  44. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is the large language model (LLM), a generative artificial intelligence (AI) system trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise (...)
  45. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper (...)
  47. Does the Principle of Compositionality Explain Productivity? For a Pluralist View of the Role of Formal Languages as Models.Ernesto Perini-Santos - 2017 - Contexts in Philosophy 2017 - CEUR Workshop Proceedings.
    One of the main motivations for having a compositional semantics is the account of the productivity of natural languages. Formal languages are often part of the account of productivity, i.e., of how beings with finite capacities are able to produce and understand a potentially infinite number of sentences, by offering a model of this process. This account of productivity consists in the generation of proofs in a formal system, that is taken to represent the way speakers grasp (...)
  48. From Models to Simulations.Franck Varenne - 2018 - London, UK: Routledge.
    This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition, data and visualization and how these factors have led to the spread of simulation models since the 1950s. -/- Using historical, comparative and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how (...)
    15 citations
  49. On the relationship between cognitive models and spiritual maps. Evidence from Hebrew language mysticism.Brian L. Lancaster - 2000 - Journal of Consciousness Studies 7 (11-12):11-12.
    It is suggested that the impetus to generate models is probably the most fundamental point of connection between mysticism and psychology. In their concern with the relation between ‘unseen’ realms and the ‘seen’, mystical maps parallel cognitive models of the relation between ‘unconscious’ and ‘conscious’ processes. The map or model constitutes an explanation employing terms current within the respective canon. The case of language mysticism is examined to illustrate the premise that cognitive models may benefit from an understanding of (...)
    2 citations
  50. Prompting Metalinguistic Awareness in Large Language Models: ChatGPT and Bias Effects on the Grammar of Italian and Italian Varieties.Angelapia Massaro & Giuseppe Samo - 2023 - Verbum 14.
    We explore ChatGPT’s handling of left-peripheral phenomena in Italian and Italian varieties through prompt engineering to investigate 1) forms of syntactic bias in the model, 2) the model’s metalinguistic awareness in relation to reorderings of canonical clauses (e.g., Topics) and certain grammatical categories (object clitics). A further question concerns the content of the model’s sources of training data: how are minor languages included in the model’s training? The results of our investigation show that 1) the model seems to be biased (...)
1 — 50 / 999