  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concepts of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level (...)
  • 'You have to put a lot of trust in me': autonomy, trust, and trustworthiness in the context of mobile apps for mental health. Regina Müller, Nadia Primc & Eva Kuhn - 2023 - Medicine, Health Care and Philosophy 26 (3):313-324.
    Trust and trustworthiness are essential for good healthcare, especially in mental healthcare. New technologies, such as mobile health apps, can affect trust relationships. In mental health, some apps need the trust of their users for therapeutic efficacy and explicitly ask for it, for example, through an avatar. Suppose an artificial character in an app delivers healthcare. In that case, the following questions arise: Whom does the user direct their trust to? Whether and when can an avatar be considered trustworthy? Our (...)
  • Listening to algorithms: The case of self-knowledge. Casey Doyle - forthcoming - European Journal of Philosophy.
    This paper begins with the thought that there is something out of place about offloading inquiry into one's own mind to AI. The paper's primary goal is to articulate the unease felt when considering cases of doing so. It draws a parallel with the use of algorithms in the criminal law: in both cases one feels entitled to be treated as an exception to a verdict made on the basis of a certain kind of evidence. Then it identifies an account (...)
  • The prospect of artificial-intelligence supported ethics review. Philip J. Nickel - forthcoming - Ethics and Human Research.
    The burden of research ethics review falls not just on researchers, but on those who serve on research ethics committees (RECs). With the advent of automated text analysis and generative artificial intelligence, it has recently become possible to teach models to support human judgment, for example by highlighting relevant parts of a text and suggesting actionable precedents and explanations. It is time to consider how such tools might be used to support ethics review and oversight. This commentary argues that with (...)