  • ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical of current scholarship, which often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  • Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement. Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society:1-12.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to act morally, instead of merely (...)
  • No Agent in the Machine: Being Trustworthy and Responsible about AI. Niël Henk Conradie & Saskia K. Nagel - 2024 - Philosophy and Technology 37 (2):1-24.
    Many recent AI policies are structured under labels that follow a particular trend: national and international guidelines, policies, and regulations, such as the EU’s and USA’s ‘Trustworthy AI’ and China’s and India’s ‘Responsible AI’, use a label that follows the recipe [agentially loaded notion + ‘AI’]. One result of this branding, even if implicit, is to encourage laypeople to apply these agentially loaded notions to the AI technologies themselves. Yet these notions are appropriate only (...)
  • Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article begins with the question of whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  • Can AlphaGo be apt subjects for Praise/Blame for "Move 37"? Mubarak Hussain - 2023 - AIES '23: AAAI/ACM Conference on AI, Ethics, and Society, Montréal, QC, Canada, August.
    This paper examines whether machines (algorithms/programs/AI systems) are apt subjects for praise or blame for some actions or performances. I consider "Move 37" of AlphaGo as a case study. DeepMind’s AlphaGo is an AI algorithm developed to play the game of Go. AlphaGo utilizes deep neural networks, and because it is trained through reinforcement learning, the algorithm can improve itself over time. Such AI models can go beyond the intended task and perform novel and unpredictable (...)
  • Automated decision-making and the problem of evil. Andrea Berber - 2023 - AI and Society:1-10.
    This paper points to a dilemma humanity may face in light of AI advancements: whether to create a world with less evil or to maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • Analyzing the justification for using generative AI technology to generate judgments based on the virtue jurisprudence theory. Shilun Zhou - 2024 - Journal of Decision Systems 1:1-24.
    This paper responds to the question of whether judgements generated by judges using ChatGPT can be directly adopted. It posits that, on virtue jurisprudence theory, it is unjust for judges to rely on and directly adopt ChatGPT-generated judgements. The paper applies case-based empirical analysis and is the first to use a virtue jurisprudence approach to analyse the question and support its argument. The first section reveals the use of generative AI-based tools in judicial practice and the existence of erroneous (...)