  • Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article sets in with the question whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  • The Curious Case of Uncurious Creation. Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue epistemic (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • The Man Behind the Curtain: Appropriating Fairness in AI. Marcin Korecki, Guillaume Köstner, Emanuele Martinelli & Cesare Carissimo - 2024 - Minds and Machines 34 (1):1-30.
    Our goal in this paper is to establish a set of criteria for understanding the meaning and sources of attributing (un)fairness to AI algorithms. To do so, we first establish that (un)fairness, like other normative notions, can be understood in a proper primary sense and in secondary senses derived by analogy. We argue that AI algorithms cannot be said to be (un)fair in the proper sense due to a set of criteria related to normativity and agency. However, we demonstrate how (...)
  • Creative Agents: Rethinking Agency and Creativity in Human and Artificial Systems. Caterina Moruzzi - 2023 - Journal of Aesthetics and Phenomenology 9 (2):245-268.
    In the last decade, technological systems based on Artificial Intelligence (AI) architectures entered our lives at an increasingly fast pace. Virtual assistants facilitate our daily tasks, recom...
  • The Ethics of Terminology: Can We Use Human Terms to Describe AI? Ophelia Deroy - 2023 - Topoi 42 (3):881-889.
    Despite facing significant criticism for assigning human-like characteristics to artificial intelligence, phrases like “trustworthy AI” are still commonly used in official documents and ethical guidelines. It is essential to consider why institutions continue to use these phrases, even though they are controversial. This article critically evaluates various reasons for using these terms, including ontological, legal, communicative, and psychological arguments. All these justifications share the common feature of trying to justify the official use of terms like “trustworthy AI” by appealing to (...)
  • The Boundaries of Ecological Ethics: Kant’s Philosophy in Dialog with the “End of Human Exclusiveness” Thesis. Svetlana A. Martynova - 2023 - Kantian Journal 42 (4):86-111.
    The developers of ecological ethics claim that the rationale of anthropocentrism is false. Its main message is that natural complexes and resources exist to be useful to the human being who sees them only from the perspective of using them and does not take into account their intrinsic value. Kant’s anthropocentric teaching argues that the instrumental attitude to nature has its limits. These limits are hard to determine because the anthropocentrists claim that the human being is above nature. Indeed, the (...)
  • The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership. Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by the (...)