  • Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article begins with the question of whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  • ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical of current scholarship, which often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  • No Agent in the Machine: Being Trustworthy and Responsible about AI. Niël Henk Conradie & Saskia K. Nagel - 2024 - Philosophy and Technology 37 (2):1-24.
    Many recent AI policies follow a particular naming trend: national and international guidelines, policies, and regulations, such as the EU’s and USA’s ‘Trustworthy AI’ and China’s and India’s ‘Responsible AI’, use labels that follow the recipe [agentially loaded notion + ‘AI’]. A result of this branding, even if implicit, is to encourage laypeople to apply these agentially loaded notions to the AI technologies themselves. Yet, these notions are appropriate only (...)
  • Automated decision-making and the problem of evil. Andrea Berber - 2023 - AI and Society:1-10.
    This paper points to a dilemma humanity may face in light of AI advancements: whether to create a world with less evil or to preserve humans’ status as moral agents. The dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • Can AlphaGo be apt subjects for Praise/Blame for "Move 37"? Mubarak Hussain - 2023 - AIES '23: AAAI/ACM Conference on AI, Ethics, and Society, Montréal, QC, Canada, August.
    This paper examines whether machines (algorithms/programs/AI systems) are apt subjects for praise or blame for certain actions or performances, taking "Move 37" of AlphaGo as a case study. DeepMind’s AlphaGo is an AI algorithm developed to play the game of Go. AlphaGo utilizes deep neural networks and, because it is trained through reinforcement learning, can improve itself over time. Such AI models can go beyond the intended task and perform novel and unpredictable (...)