  • Feminist Re-Engineering of Religion-Based AI Chatbots. Hazel T. Biana - 2024 - Philosophies 9 (1):20.
    Religion-based AI chatbots serve religious practitioners by bringing them godly wisdom through technology. These bots reply to spiritual and worldly questions by drawing insights or citing verses from the Quran, the Bible, the Bhagavad Gita, the Torah, or other holy books. They answer religious and theological queries by claiming to offer historical contexts and providing guidance and counseling to their users. A criticism of these bots is that they may give inaccurate answers and proliferate bias by propagating homogenized versions of (...)
  • Using rhetorical strategies to design prompts: a human-in-the-loop approach to make AI useful. Nupoor Ranade, Marly Saravia & Aditya Johri - forthcoming - AI and Society:1-22.
    The growing capabilities of artificial intelligence (AI) word processing models have demonstrated exceptional potential to impact language-related tasks and functions. Their fast pace of adoption and probable effect have also given rise to controversy within certain fields. Models such as GPT-3 are a particular concern for professionals engaged in writing, particularly as their engagement with these technologies is limited by a lack of ability to control their output. Most efforts to maximize and control output rely on a process known (...)
  • The Weirdness of the World. Eric Schwitzgebel - 2024 - Princeton University Press.
    How all philosophical explanations of human consciousness and the fundamental structure of the cosmos are bizarre—and why that’s a good thing Do we live inside a simulated reality or a pocket universe embedded in a larger structure about which we know virtually nothing? Is consciousness a purely physical matter, or might it require something extra, something nonphysical? According to the philosopher Eric Schwitzgebel, it’s hard to say. In The Weirdness of the World, Schwitzgebel argues that the answers to these fundamental (...)
  • Natural and Artificial Intelligence: A Comparative Analysis of Cognitive Aspects. Francesco Abbate - 2023 - Minds and Machines 33 (4):791-815.
    Moving from a behavioral definition of intelligence, which describes it as the ability to adapt to the surrounding environment and deal effectively with new situations (Anastasi, 1986), this paper explains to what extent the performance obtained by ChatGPT in the linguistic domain can be considered intelligent behavior and to what extent it cannot. It also explains in what sense the hypothesis of decoupling between cognitive and problem-solving abilities, proposed by Floridi (2017) and Floridi and Chiriatti (2020), should be interpreted. (...)
  • Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article sets in with the question whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  • Extending Introspection. Lukas Schwengerer - 2021 - In Inês Hipólito, Robert William Clowes & Klaus Gärtner (eds.), The Mind-Technology Problem: Investigating Minds, Selves and 21st Century Artefacts. Springer Verlag. pp. 231-251.
    Clark and Chalmers propose that the mind extends further than skin and skull. If they are right, then we should expect this to have some effect on our way of knowing our own mental states. If the content of my notebook can be part of my belief system, then looking at the notebook seems to be a way to get to know my own beliefs. However, it is at least not obvious whether self-ascribing a belief by looking at my notebook (...)
  • Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models. Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of their answers’, (...)
  • A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Moving beyond content‐specific computation in artificial neural networks. Nicholas Shea - 2021 - Mind and Language 38 (1):156-177.
    A basic deep neural network (DNN) is trained to exhibit a large set of input–output dispositions. While being a good model of the way humans perform some tasks automatically, without deliberative reasoning, more is needed to approach human‐like artificial intelligence. Analysing recent additions brings to light a distinction between two fundamentally different styles of computation: content‐specific and non‐content‐specific computation (as first defined here). For example, deep episodic RL networks draw on both. So does human conceptual reasoning. Combining the two takes (...)
  • Strategies for translating machine errors in automatically generated texts (using GPT-4 as an example). В. И. Алейникова - 2023 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 1:39-52.
    The article discusses strategies for translating “machine texts”, using generative pre-trained transformers (GPT) as an example. Currently, the study and development of machine text generation has become an important task for processing and analyzing texts in different languages. Modern technologies of artificial intelligence and neural networks allow us to create powerful tools for activities in this field, which are becoming more and more effective every year. Generative transformers are one of such tools. The study of generative transformers also allows (...)
  • Trust Me on This One: Conforming to Conversational Assistants. Donna Schreuter, Peter van der Putten & Maarten H. Lamers - 2021 - Minds and Machines 31 (4):535-562.
    Conversational artificial agents and artificially intelligent voice assistants are becoming increasingly popular. Digital virtual assistants such as Siri, or conversational devices such as Amazon Echo or Google Home, are permeating everyday life, and are designed to be more and more humanlike in their speech. This study investigates the effect this can have on one’s conformity with an AI assistant. In the 1950s, Solomon Asch already demonstrated the power and danger of conformity amongst people. In these classical experiments, test persons were (...)
  • Rights and Wrongs in Talk of Mind-Reading Technology. Stephen Rainey - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-11.
    This article examines the idea of mind-reading technology by focusing on an interesting case of applying a large language model (LLM) to brain data. On the face of it, experimental results appear to show that it is possible to reconstruct mental contents directly from brain data by processing via a chatGPT-like LLM. However, the author argues that this apparent conclusion is not warranted. Through examining how LLMs work, it is shown that they are importantly different from natural language. The former (...)
  • AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses. Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, Mary Kalantzis, Daniel Araya, Marek Tesar, Carl Mika, Lei Chen, Chengbing Wang, Sean Sturm, Sharon Rider & Steve Fuller - forthcoming - Educational Philosophy and Theory.
    Michael A. Peters, Beijing Normal University. ChatGPT is an AI chatbot released by OpenAI on November 30, 2022, with a ‘stable release’ on February 13, 2023. It belongs to OpenAI’s GPT-3 family (generativ...
  • A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or Deep Mind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature (...)
  • At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3. M. A. Palacios Barea, D. Boeren & J. F. Ferreira Goncalves - forthcoming - AI and Society:1-19.
    Algorithmic biases, or algorithmic unfairness, have been a topic of public and scientific scrutiny for the past years, as increasing evidence suggests the pervasive assimilation of human cognitive biases and stereotypes in such systems. This research is specifically concerned with analyzing the presence of discursive biases in the text generated by GPT-3, an NLPM which has been praised in recent years for resembling human language so closely that it is becoming difficult to differentiate between the human and the algorithm. The (...)
  • Combining prompt-based language models and weak supervision for labeling named entity recognition on legal documents. Vitor Oliveira, Gabriel Nogueira, Thiago Faleiros & Ricardo Marcacini - forthcoming - Artificial Intelligence and Law:1-21.
    Named entity recognition (NER) is a very relevant task for text information retrieval in natural language processing (NLP) problems. Most recent state-of-the-art NER methods require humans to annotate and provide useful data for model training. However, using human power to identify, circumscribe and label entities manually can be very expensive in terms of time, money, and effort. This paper investigates the use of prompt-based language models (OpenAI’s GPT-3) and weak supervision in the legal domain. We apply both strategies as alternative (...)
  • Can Computational Intelligence Model Phenomenal Consciousness? Eduardo C. Garrido Merchán & Sara Lumbreras - 2023 - Philosophies 8 (4):70.
    Consciousness and intelligence are properties that can be misunderstood as necessarily dependent. The term artificial intelligence and the kind of problems it has managed to solve in recent years have been used as an argument to establish that machines experience some sort of consciousness. Following Russell’s analogy, if a machine can do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities (...)
  • The Incalculability of the Generated Text. Alžbeta Kuchtová - 2024 - Philosophy and Technology 37 (1):1-20.
    In this paper, I explore Derrida’s concept of exteriorization in relation to texts generated by machine learning. I first discuss Heidegger’s view of machine creation and then present Derrida’s criticism of Heidegger. I explain the concept of iterability, which is the central notion on which Derrida’s criticism is based. The thesis defended in the paper is that Derrida’s account of iterability provides a helpful framework for understanding the phenomenon of machine learning–generated literature. His account of textuality highlights the incalculability and (...)
  • Dual-use implications of AI text generation. Julian J. Koplin - 2023 - Ethics and Information Technology 25 (2):1-11.
    AI researchers have developed sophisticated language models capable of generating paragraphs of 'synthetic text' on topics specified by the user. While AI text generation has legitimate benefits, it could also be misused, potentially to grave effect. For example, AI text generators could be used to automate the production of convincing fake news, or to inundate social media platforms with machine-generated disinformation. This paper argues that AI text generators should be conceptualised as a dual-use technology, outlines some relevant lessons from earlier (...)
  • A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT. Reto Gubelmann - 2023 - Grazer Philosophische Studien 99 (4):485-523.
    In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state of the art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question whether a given being (...)
  • The Future of Work: Augmentation or Stunting? Markus Furendal & Karim Jebari - 2023 - Philosophy and Technology (2):1-22.
    The last decade has seen significant improvements in Artificial Intelligence (AI) technologies, including robotics, machine vision, speech recognition and text generation. Increasing automation will undoubtedly affect the future of work, and discussions on how the development of AI in the workplace will impact labor markets often include two scenarios: (1) labor replacement and (2) labor enabling. The former involves replacing workers with machines, while the latter assumes that human-machine cooperation can significantly improve worker productivity. In this context, it is often (...)
  • Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs. Ori Freiman - forthcoming - Episteme:1-17.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis focuses (...)
  • AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models. Luciano Floridi - 2023 - Philosophy and Technology 36 (1):1-7.
  • Artificial understanding: a step toward robust AI. Erez Firt - forthcoming - AI and Society:1-13.
    In recent years, state-of-the-art artificial intelligence systems have started to show signs of what might be seen as human level intelligence. More specifically, large language models such as OpenAI’s GPT-3, and more recently Google’s PaLM and DeepMind’s GATO, are performing amazing feats involving the generation of texts. However, it is acknowledged by many researchers that contemporary language models, and more generally, learning systems, still lack important capabilities, such as understanding, reasoning and the ability to employ knowledge of the world and (...)
  • Friend or foe? Exploring the implications of large language models on the science system. Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle & Fabian Sofsky - forthcoming - AI and Society:1-13.
    The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on the science (...)
  • The great Transformer: Examining the role of large language models in the political economy of AI. Wiebke Denkena & Dieuwertje Luitse - 2021 - Big Data and Society 8 (2).
    In recent years, AI research has become more and more computationally demanding. In natural language processing, this tendency is reflected in the emergence of large language models like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks and their language generation capacities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy environmental footprints, and future social ramifications. In (...)
  • Semantic Noise and Conceptual Stagnation in Natural Language Processing. Sonia de Jager - 2023 - Angelaki 28 (3):111-132.
    Semantic noise, the effect ensuing from the denotative and thus functional variability exhibited by different terms in different contexts, is a common concern in natural language processing (NLP). While unarguably problematic in specific applications (e.g., certain translation tasks), the main argument of this paper is that failing to observe this linguistic matter of fact as a generative effect rather than as an obstacle leads to actual obstacles in instances where language model outputs are presented as neutral. Given that a common (...)
  • ChatGPT: deconstructing the debate and moving it forward. Mark Coeckelbergh & David J. Gunkel - forthcoming - AI and Society:1-11.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two phased approach of deconstruction and (...)
  • ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact. Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
  • Rethinking “digital”: a genealogical enquiry into the meaning of digital and its impact on individuals and society. Luca Capone, Marta Rocchi & Marta Bertolaso - forthcoming - AI and Society:1-11.
    In the current social and technological scenario, the term digital is abundantly used with an apparently transparent and unambiguous meaning. This article aims to unveil the complexity of this concept, retracing its historical and cultural origin. This genealogical overview allows us to understand why an instrumental conception of digital media has prevailed, considering the digital as a mere tool to convey a message, as opposed to a constitutive conception. The constitutive conception places the digital phenomenon in the broader ground (...)