  • Trust, Explainability and AI. Sam Baron - 2025 - Philosophy and Technology 38 (4):1-23.
    There has been a surge of interest in explainable artificial intelligence (XAI). It is commonly claimed that explainability is necessary for trust in AI, and that this is why we need it. In this paper, I argue that for some notions of trust it is plausible that explainability is indeed a necessary condition, but that these kinds of trust are not appropriate for AI. For notions of trust that are appropriate for AI, explainability is not a necessary condition. I thus (...)
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):360-369.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Andrea Ferrario - 2024 - Science and Engineering Ethics 30 (6):1-21.
    We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately relying on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of (...)
  • Trust and Trustworthiness in AI. Juan Manuel Durán & Giorgia Pozzi - 2025 - Philosophy and Technology 38 (1):1-31.
    Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and (...)
  • The prospect of artificial-intelligence supported ethics review. Philip J. Nickel - 2024 - Ethics and Human Research 46 (6):25-28.
    The burden of research ethics review falls not just on researchers, but on those who serve on research ethics committees (RECs). With the advent of automated text analysis and generative artificial intelligence, it has recently become possible to teach models to support human judgment, for example by highlighting relevant parts of a text and suggesting actionable precedents and explanations. It is time to consider how such tools might be used to support ethics review and oversight. This commentary argues that with (...)
  • 'You have to put a lot of trust in me': autonomy, trust, and trustworthiness in the context of mobile apps for mental health. Regina Müller, Nadia Primc & Eva Kuhn - 2023 - Medicine, Health Care and Philosophy 26 (3):313-324.
    Trust and trustworthiness are essential for good healthcare, especially in mental healthcare. New technologies, such as mobile health apps, can affect trust relationships. In mental health, some apps need the trust of their users for therapeutic efficacy and explicitly ask for it, for example, through an avatar. If an artificial character in an app delivers healthcare, the following questions arise: To whom does the user direct their trust? When, if ever, can an avatar be considered trustworthy? Our (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • Can We Trust Artificial Intelligence? Christian Budnik - 2025 - Philosophy and Technology 38 (1):1-23.
    In view of the dramatic advancements in the development of artificial intelligence technology in recent years, it has become a commonplace to demand that AI systems be trustworthy. This view presupposes that it is possible to trust AI technology in the first place. The aim of this paper is to challenge this view. In order to do that, it is argued that the philosophy of trust really revolves around the problem of how to square the epistemic and the normative dimensions (...)
  • Listening to algorithms: The case of self-knowledge. Casey Doyle - 2025 - European Journal of Philosophy 33 (1):134-147.
    This paper begins with the thought that there is something out of place about offloading inquiry into one's own mind to AI. The paper's primary goal is to articulate the unease felt when considering cases of doing so. It draws a parallel with the use of algorithms in the criminal law: in both cases one feels entitled to be treated as an exception to a verdict made on the basis of a certain kind of evidence. Then it identifies an account (...)
  • Data over dialogue: Why artificial intelligence is unlikely to humanise medicine. Joshua Hatherley - 2024 - Dissertation, Monash University
    Recently, a growing number of experts in artificial intelligence (AI) and medicine have begun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to compromise the (...)
  • Why we should talk about institutional (dis)trustworthiness and medical machine learning. Michiel De Proost & Giorgia Pozzi - 2025 - Medicine, Health Care and Philosophy 28 (1):83-92.
    The principle of trust has been placed at the centre as an attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural level ex negativo as we aim to analyse the concept of “institutional distrustworthiness” to achieve a proper diagnosis of how we should not engage with medical machine learning. First, we begin with several examples that hint at (...)
  • Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of 'social' robots and how their 'social affordances' affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concepts of 'trust' and 'trustworthiness' with respect to the guidelines and recommendations of the High-Level (...)
  • Fortifying Trust: Can Computational Reliabilism Overcome Adversarial Attacks? Pawel Pawlowski & Kristian González Barman - 2025 - Philosophy and Technology 38 (1):1-19.
    Computational Reliabilism (CR) has emerged as a promising framework for assessing the trustworthiness of AI systems, particularly in domains where complete transparency is infeasible. However, the rise of sophisticated adversarial attacks poses a significant challenge to CR’s key reliability indicators. This paper critically examines the robustness of CR in the face of evolving adversarial threats, revealing the limitations of verification and validation methods, robustness analysis, implementation history, and expert knowledge when confronted with malicious actors. Our analysis suggests that CR, in (...)
  • ‘Opacity’ and ‘Trust’: From Concepts and Measurements to Public Policy. Ori Freiman, John McAndrews, Jordan Mansell & Clifton van der Linden - 2025 - Philosophy and Technology 38 (1):1-22.
    This paper provides four insights relating to policy-making, focusing on the complex relationship between the abstract concept of trust—with its numerous empirical expressions—and the concept of opacity in AI technologies. First, we set the ground by discussing the nature of trust as it evolves from interpersonal to technological realms (§ 2), examining the plethora of measurement methods and objects that reflect the concept’s rich diversity. We then investigate the concept of opacity in AI systems (§ 3), challenging the conventional wisdom (...)