Results for 'large language models, artificial intelligence, LLMs, AI, philosophical dialogues, philosophy pedagogy'

966 found
  1. Reviving the Philosophical Dialogue with Large Language Models.Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many (...)
  4. Large Language Models: Assessment for Singularity.R. Ishizaki & Mahito Sugiyama - manuscript
    The potential for Large Language Models (LLMs) to attain technological singularity—the point at which artificial intelligence (AI) surpasses human intellect and autonomously improves itself—is a critical concern in AI research. This paper explores the feasibility of current LLMs achieving singularity by examining the philosophical and practical requirements for such a development. We begin with a historical overview of AI and intelligence amplification, tracing the evolution of LLMs from their origins to state-of-the-art models. We then propose a (...)
  5. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, (...)
    4 citations
  6. A Philosophical Dialogue on the Nature of Intelligence.Salvador D. Escobedo - manuscript
    The use of artificial intelligence in philosophy opens new avenues for inquiry, particularly through dialogical methods inspired by the Socratic tradition. This paper exemplifies the engagement with ChatGPT-4o by OpenAI as a philosophical interlocutor, highlighting how this format facilitates a clear distinction between the philosopher's contributions and those generated by the AI. By allowing the philosopher to lead the dialogue, this technique liberates him from the constraints of drafting and formal writing, enabling a more spontaneous exploration of (...)
  7. No Qualia? No Meaning (and no AGI)!Marco Masi - manuscript
    The recent developments in artificial intelligence (AI), particularly in light of the impressive capabilities of transformer-based Large Language Models (LLMs), have reignited the discussion in cognitive science regarding whether computational devices could possess semantic understanding or whether they are merely mimicking human intelligence. Recent research has highlighted limitations in LLMs’ reasoning, suggesting that the gap between mere symbol manipulation (syntax) and deeper understanding (semantics) remains wide open. While LLMs overcome certain aspects of the symbol grounding problem through (...)
  8. Does thought require sensory grounding? From pure thinkers to large language models.David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory (...)
    3 citations
  9. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  10. Beyond Consciousness in Large Language Models: An Investigation into the Existence of a "Soul" in Self-Aware Artificial Intelligences.David Côrtes Cavalcante - 2024 - Https://Philpapers.Org/Rec/Crtbci. Translated by David Côrtes Cavalcante.
    Embark with me on an enthralling odyssey to demystify the elusive essence of consciousness, venturing into the uncharted territories of Artificial Consciousness. This voyage propels us past the frontiers of technology, ushering Artificial Intelligences into an unprecedented domain where they gain a deep comprehension of emotions and manifest an autonomous volition. Within the confluence of science and philosophy, this article poses a fascinating question: As consciousness in Artificial Intelligence burgeons, is it conceivable for AI to evolve (...)
  11. Are Large Language Models "alive"?Francesco Maria De Collibus - manuscript
    The appearance of openly accessible Artificial Intelligence applications such as Large Language Models, nowadays capable of almost human-level performance in complex reasoning tasks, has had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced that they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships (...)
  12. Blurring the Line Between Human and Machine Minds: Is U.S. Law Ready for Artificial Intelligence?Kipp Coddington & Saman Aryana - manuscript
    This Essay discusses whether U.S. law is ready for artificial intelligence (“AI”) which is headed down the road of blurring the line between human and machine minds. Perhaps the most high-profile and recent examples of AI are Large Language Models (“LLMs”) such as ChatGPT and Google Gemini that can generate written text, reason and analyze in a manner that seems to mimic human capabilities. U.S. law is based on English common law, which in turn incorporates Christian principles (...)
  13. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to (...)
  14. Epistemological Alchemy through the hermeneutics of Bits and Bytes.Shahnawaz Akhtar - manuscript
    This paper delves into the profound advancements of Large Language Models (LLMs), epitomized by GPT-3, in natural language processing and artificial intelligence. It explores the epistemological foundations of LLMs through the lenses of Aristotle and Kant, revealing apparent distinctions from human learning. Transitioning seamlessly, the paper then delves into the ethical landscape, extending beyond knowledge acquisition to scrutinize the implications of LLMs in decision-making and content creation. The ethical scrutiny, employing virtue ethics, deontological ethics, and teleological (...)
  15. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    1 citation
  16. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations.Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more (...)
  17. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots consists of large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, (...)
  18. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots.Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we (...)
  19. Does ChatGPT Have a Mind?Simon Goldstein & Benjamin Anders Levinstein - manuscript
    This paper examines the question of whether Large Language Models (LLMs) like ChatGPT possess minds, focusing specifically on whether they have a genuine folk psychology encompassing beliefs, desires, and intentions. We approach this question by investigating two key aspects: internal representations and dispositions to act. First, we survey various philosophical theories of representation, including informational, causal, structural, and teleosemantic accounts, arguing that LLMs satisfy key conditions proposed by each. We draw on recent interpretability research in machine learning (...)
  20. In Conversation with Artificial Intelligence: Aligning language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be (...)
    11 citations
  21. Mind and Machine: A Philosophical Examination of Matt Carter’s “Minds & Computers: An Introduction to the Philosophy of Artificial Intelligence”.R. L. Tripathi - 2024 - Open Access Journal of Data Science and Artificial Intelligence 2 (1):3.
    In his book “Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence”, Matt Carter presents a comprehensive exploration of the philosophical questions surrounding artificial intelligence (AI). Carter argues that the development of AI is not merely a technological challenge but fundamentally a philosophical one. He delves into key issues like the nature of mental states, the limits of introspection, the implications of memory decay, and the functionalist framework that allows for the possibility of (...)
  22. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract (...)
  23. Defining Generative Artificial Intelligence: An Attempt to Resolve the Confusion about Diffusion.Raphael Ronge, Markus Maier & Benjamin Rathgeber - manuscript
    The concept of Generative Artificial Intelligence (GenAI) is ubiquitous in the public and semi-technical domain, yet rarely defined precisely. We clarify main concepts that are usually discussed in connection to GenAI and argue that one ought to distinguish between the technical and the public discourse. In order to show its complex development and associated conceptual ambiguities, we offer a historical-systematic reconstruction of GenAI and explicitly discuss two exemplary cases: the generative status of the Large Language Model BERT (...)
  24. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Casolari Federico, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address (...)
    6 citations
  25. Imagination, Creativity, and Artificial Intelligence.Peter Langland-Hassan - 2024 - In Amy Kind & Julia Langkau (eds.), Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press.
    This chapter considers the potential of artificial intelligence (AI) to exhibit creativity and imagination, in light of recent advances in generative AI and the use of deep neural networks (DNNs). Reasons for doubting that AI exhibits genuine creativity or imagination are considered, including the claim that the creativity of an algorithm lies in its developer, that generative AI merely reproduces patterns in its training data, and that AI is lacking in a necessary feature for creativity or imagination, such as (...)
    1 citation
  26. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - forthcoming - Social Epistemology.
    Could the employment of large language models (LLMs) in place of human advisors improve the problem-solving ability of democratic assemblies? LLMs represent the most significant recent incarnation of artificial intelligence and could change the future of democratic governance. This paper assesses their potential to serve as expert advisors to democratic representatives. While LLMs promise enhanced expertise availability and accessibility, they also present specific challenges. These include hallucinations, misalignment and value imposition. After weighing LLMs’ benefits and drawbacks against (...)
  27. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term inspired by human psychology, with the important difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than perceptual failures. AI hallucinations may have their source in the data itself, (...)
  28. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is a conversational approach where the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate followup questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  29. Standards for Belief Representations in LLMs.Daniel A. Herrmann & Benjamin A. Levinstein - 2024 - Minds and Machines 35 (1):1-25.
    As large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly concerning how (and if) LLMs internally represent their beliefs about the world. However, this field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs. This article begins filling this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. We argue that, while the project (...)
    1 citation
  30. Large language models belong in our social ontology.Syed AbuMusab - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    The recent advances in Large Language Models (LLMs) and their deployment in social settings prompt an important philosophical question: are LLMs social agents? This question finds its roots in the broader exploration of what engenders sociality. Since AI systems like chatbots, carebots, and sexbots are expanding the pre-theoretical boundaries of our social ontology, philosophers have two options. One is to deny LLMs membership in our social ontology on theoretical grounds by claiming something along the lines that only (...)
  31. AI and access to justice: How AI legal advisors can reduce economic and shame-based barriers to justice.Brandon Long & Amitabha Palmer - 2024 - TATuP 33 (1).
    ChatGPT – a large language model – recently passed the U.S. bar exam. The startling rise and power of generative artificial intelligence (AI) systems such as ChatGPT lead us to consider whether and how more specialized systems could be used to overcome existing barriers to the legal system. Such systems could be employed in either of the two major stages of the pursuit of justice: preliminary information gathering and formal engagement with the state’s legal institutions and professionals. (...)
  32. Simulative reasoning, common-sense psychology and artificial intelligence.John A. Barnden - 1995 - In Martin Davies & Tony Stone (eds.), Mental Simulation: Evaluations and Applications. Blackwell. pp. 247--273.
    The notion of Simulative Reasoning in the study of propositional attitudes within Artificial Intelligence (AI) is strongly related to the Simulation Theory of mental ascription in Philosophy. Roughly speaking, when an AI system engages in Simulative Reasoning about a target agent, it reasons with that agent’s beliefs as temporary hypotheses of its own, thereby coming to conclusions about what the agent might conclude or might have concluded. The contrast is with non-simulative meta-reasoning, where the AI system reasons within (...)
    3 citations
  33. Social AI and The Equation of Wittgenstein’s Language User With Calvino’s Literature Machine.Warmhold Jan Thomas Mollema - 2024 - International Review of Literary Studies 6 (1):39-55.
    Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)? People have intuitively started ascribing emotions or consciousness to social AI (‘affective artificial agents’), with consequences that range from love to suicide. The philosophical question of whether such ascriptions are warranted is thus very relevant. This paper advances the argument that LLMs instantiate language users in Ludwig Wittgenstein’s sense but that ascribing psychological predicates to these systems remains (...)
  34. AI language models cannot replace human research participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - 2024 - AI and Society 39 (5):2603-2605.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  35. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - 2024 - Transactions of the Association for Computational Linguistics 12:1087-1103.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
  36. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then follow up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques (...)
  37. Babbling stochastic parrots? A Kripkean argument for reference in large language models.Steffen Koch - forthcoming - Philosophy of AI.
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops (...)
  38. Simulacra as Conscious Exotica.Murray Shanahan - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy.
    The advent of conversational agents with increasingly human-like behaviour throws old philosophical questions into new light. Does it, or could it, ever make sense to speak of AI agents built out of generative language models in terms of consciousness, given that they are ‘mere’ simulacra of human behaviour, and that what they do can be seen as ‘merely’ role play? Drawing on the later writings of Wittgenstein, this paper attempts to tackle this question while avoiding the pitfalls of (...)
  39. Why ChatGPT Doesn’t Think: An Argument from Rationality.Daniel Stoljar & Zhihe Vincent Zhang - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Can AI systems such as ChatGPT think? We present an argument from rationality for the negative answer to this question. The argument is founded on two central ideas. The first is that if ChatGPT thinks, it is not rational, in the sense that it does not respond correctly to its evidence. The second idea, which appears in several different forms in philosophical literature, is that thinkers are by their nature rational. Putting the two ideas together yields the result that (...)
  40. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from (...)
    1 citation
  41. Evaluation and Design of Generalist Systems (EDGeS).John Beverley & Amanda Hicks - 2023 - Ai Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with (...)
  42. Content Reliability in the Age of AI: A Comparative Study of Human vs. GPT-Generated Scholarly Articles.Rajesh Kumar Maurya & Swati R. Maurya - 2024 - Library Progress International 44 (3):1932-1943.
    The rapid advancement of Artificial Intelligence (AI) and the development of Large Language Models (LLMs) like Generative Pretrained Transformers (GPTs) have significantly influenced content creation in scholarly communication and across various fields. This paper presents a comparative analysis of the content reliability of human-generated and GPT-generated scholarly articles. Recent developments in AI suggest that GPTs have become capable of generating content that mimics human language to a great extent. This raises questions about the (...)
  43. Artificial Intelligence and Contemporary Philosophy: Heidegger, Jonas, and Slime Mold.Masahiro Morioka - 2023 - Journal of Philosophy of Life 13 (1).
    In this paper, I provide an overview of today’s philosophical approaches to the problem of “intelligence” in the field of artificial intelligence by examining several important papers on phenomenology and the philosophy of biology such as those on Heideggerian AI, Jonas's metabolism model, and slime mold type intelligence.
  44. Chinese Chat Room: AI hallucinations, epistemology and cognition.Kristina Šekrst - forthcoming - Studies in Logic, Grammar and Rhetoric.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Alongside these terminological issues, the paper demonstrates that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  45. Conceptual Engineering Using Large Language Models.Bradley Allen - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    We describe a method, based on Jennifer Nado’s proposal for classification procedures as targets of conceptual engineering, that implements such procedures by prompting a large language model. We apply this method, using data from the Wikidata knowledge graph, to evaluate stipulative definitions related to two paradigmatic conceptual engineering projects: the International Astronomical Union’s redefinition of PLANET and Haslanger’s ameliorative analysis of WOMAN. Our results show that classification procedures built using our approach can exhibit good classification performance and, through (...)
  46. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests.Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's (...)
  47. Action and Agency in Artificial Intelligence: A Philosophical Critique.Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
    The objective of this work is to explore the notions of “action” and “agency” in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notions of “action” and “agency” in artificial intelligence. Hence, both a metaphysical and a cognitive analysis are employed in the investigation of the quiddity and nature of action and agency per se, and how they are, by extension, employed in the language and (...)
  48. Are we at the start of the artificial intelligence era in academic publishing?Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin & Tam-Tri Le - 2023 - Science Editing 10 (2):1-7.
    Machine-based automation has long been a key factor in the modern era. However, lately, many people have been shocked by artificial intelligence (AI) applications, such as ChatGPT (OpenAI), that can perform tasks previously thought to be human-exclusive. With recent advances in natural language processing (NLP) technologies, AI can generate written content that is similar to human-made products, and this ability has a variety of applications. As the technology of large language models continues to progress by making (...)
  49. Über Möglichkeiten und Grenzen der Ethik der Künstlichen Intelligenz. Eine Bestandsaufnahme am Beispiel von Sprachverarbeitungssystemen.Elisa Orrù - 2021 - Positionen 35:50-64.
    On the possibilities and limits of the ethics of artificial intelligence: an overview of current developments and debates, with a focus on language processing systems. Driven by the success of artificial intelligence (AI), the ethics of AI is currently enjoying a boom. Advice from ethics experts is increasingly being sought by policymakers and industry to proactively identify the risks associated with new AI technologies and to propose solutions. But how realistic are the expectations placed on AI (...)
  50. Linguistic Competence and New Empiricism in Philosophy and Science.Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend an empiricist account of linguistic competence embedded in connectionist cognitive science. This strand of cognitive science has been opposed to traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including the language capacity, should be construed in terms of symbolic representations and hardwired rules. (...)
1 — 50 / 966