  • Engineering the trust machine. Aligning the concept of trust in the context of blockchain applications. Eva Pöll - 2024 - Ethics and Information Technology 26 (2):1-16.
    Complex technology has become an essential aspect of everyday life. We rely on technology as part of basic infrastructure and repeatedly for tasks throughout the day. Yet, in many cases the relation surpasses mere reliance and evolves to trust in technology. A new, disruptive technology is blockchain. It claims to introduce trustless relationships among its users, aiming to eliminate the need for trust altogether—even being described as “the trust machine”. This paper presents a proposal to adjust the concept of trust (...)
  • Authorship and ChatGPT: a Conservative View. René van Woudenberg, Chris Ranalli & Daniel Bracker - 2024 - Philosophy and Technology 37 (1):1-26.
    Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University.
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)
  • Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This (...)
  • What Do We Teach to Engineering Students: Embedded Ethics, Morality, and Politics. Avigail Ferdman & Emanuele Ratti - 2024 - Science and Engineering Ethics 30 (1):1-26.
    In the past few years, calls for integrating ethics modules in engineering curricula have multiplied. Despite this positive trend, a number of issues with these ‘embedded’ programs remain. First, learning goals are underspecified. A second limitation is the conflation of different dimensions under the same banner, in particular confusion between ethics curricula geared towards addressing the ethics of individual conduct and curricula geared towards addressing ethics at the societal level. In this article, we propose a tripartite framework to overcome these (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Ethics of generative AI and manipulation: a design-oriented research agenda. Michael Klenk - 2024 - Ethics and Information Technology 26 (1):1-15.
    Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.
  • Socially Disruptive Technologies and Conceptual Engineering. Herman Veluwenkamp, Jeroen Hopster, Sebastian Köhler & Guido Löhr - 2024 - Ethics and Information Technology 26 (4):1-6.
    In this special issue, we focus on the connection between conceptual engineering and the philosophy of technology. Conceptual engineering is the enterprise of introducing, eliminating, or revising words and concepts. The philosophy of technology examines the nature and significance of technology. We investigate how technologies such as AI and genetic engineering (so-called “socially disruptive technologies”) disrupt our practices and concepts, and how conceptual engineering can address these disruptions. We also consider how conceptual engineering can enhance the practice of ethical design. (...)
  • Conceptual Engineering and Philosophy of Technology: Amelioration or Adaptation? Jeroen Hopster & Guido Löhr - 2023 - Philosophy and Technology 36 (4):1-17.
    Conceptual Engineering (CE) is thought to be generally aimed at ameliorating deficient concepts. In this paper, we challenge this assumption: we argue that CE is frequently undertaken with the orthogonal aim of _conceptual adaptation_. We develop this thesis with reference to the interplay between technology and concepts. Emerging technologies can exert significant pressure on conceptual systems and spark ‘conceptual disruption’. For example, advances in Artificial Intelligence raise the question of whether AIs are agents or mere objects, which can be construed (...)