  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (_including_ other (...)
  • Trust does not need to be human: it is possible to trust medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human–human interactions. (...)
  • Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) (...)
  • How do people judge the credibility of algorithmic sources? Donghee Shin - 2022 - AI and Society 37 (1):81-96.
    The exponential growth of algorithms has made establishing a trusted relationship between human and artificial intelligence increasingly important. Algorithm systems such as chatbots can play an important role in assessing a user’s credibility on algorithms. Unless users believe the chatbot’s information is credible, they are not likely to be willing to act on the recommendation. This study examines how literacy and user trust influence perceptions of chatbot information credibility. Results confirm that algorithmic literacy and users’ trust play a pivotal role (...)
  • Keep trusting! A plea for the notion of Trustworthy AI. Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi & Viola Schiaffonati - 2024 - AI and Society 39 (6):2691-2702.
    A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the (...)
  • AI support for ethical decision-making around resuscitation: proceed with care. Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
  • (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations. Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI systems? (...)
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):360-369.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • Ethical Perceptions of AI in Hiring and Organizational Trust: The Role of Performance Expectancy and Social Influence. Maria Figueroa-Armijos, Brent B. Clark & Serge P. da Motta Veiga - 2023 - Journal of Business Ethics 186 (1):179-197.
    The use of artificial intelligence (AI) in hiring entails vast ethical challenges. As such, using an ethical lens to study this phenomenon is to better understand whether and how AI matters in hiring. In this paper, we examine whether ethical perceptions of using AI in the hiring process influence individuals’ trust in the organizations that use it. Building on the organizational trust model and the unified theory of acceptance and use of technology, we explore whether ethical perceptions are shaped by (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • How much do you trust me? A logico-mathematical analysis of the concept of the intensity of trust. Michele Loi, Andrea Ferrario & Eleonora Viganò - 2023 - Synthese 201 (6):1-30.
    Trust and monitoring are traditionally antithetical concepts. Describing trust as a property of a relationship of reliance, we introduce a theory of trust and monitoring, which uses mathematical models based on two classes of functions, including _q_-exponentials, and relates the levels of trust to the costs of monitoring. As opposed to several accounts of trust that attempt to identify the special ingredient of reliance and trust relationships, our theory characterizes trust as a quantitative property of certain relations of reliance that (...)
  • Expanding Nallur's Landscape of Machine Implemented Ethics. William A. Bauer - 2020 - Science and Engineering Ethics 26 (5):2401-2410.
    What ethical principles should autonomous machines follow? How do we implement these principles, and how do we evaluate these implementations? These are some of the critical questions Vivek Nallur asks in his essay “Landscape of Machine Implemented Ethics (2020).” He provides a broad, insightful survey of answers to these questions, especially focused on the implementation question. In this commentary, I will first critically summarize the main themes and conclusions of Nallur’s essay and then expand upon the landscape that Nallur presents (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • Trust and Trustworthiness in AI. Juan Manuel Durán & Giorgia Pozzi - 2025 - Philosophy and Technology 38 (1):1-31.
    Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and (...)
  • Towards trustworthy blockchains: normative reflections on blockchain-enabled virtual institutions. Yan Teng - 2021 - Ethics and Information Technology 23 (3):385-397.
    This paper proposes a novel way to understand trust in blockchain technology by analogy with trust placed in institutions. In support of the analysis, a detailed investigation of institutional trust is provided, which is then used as the basis for understanding the nature and ethical limits of blockchain trust. Two interrelated arguments are presented. First, given blockchains’ capacity for being institution-like entities by inviting expectations similar to those invited by traditional institutions, blockchain trust is argued to be best conceptualized as (...)
  • Can We Trust Artificial Intelligence? Christian Budnik - 2025 - Philosophy and Technology 38 (1):1-23.
    In view of the dramatic advancements in the development of artificial intelligence technology in recent years, it has become a commonplace to demand that AI systems be trustworthy. This view presupposes that it is possible to trust AI technology in the first place. The aim of this paper is to challenge this view. In order to do that, it is argued that the philosophy of trust really revolves around the problem of how to square the epistemic and the normative dimensions (...)
  • Trust and Power in Airbnb’s Digital Rating and Reputation System. Tim Christiaens - 2025 - Ethics and Information Technology (2):1-13.
    Customer ratings and reviews are playing a key role in the contemporary platform economy. To establish trust among strangers without having to directly monitor platform users themselves, companies ask people to evaluate each other. Firms like Uber, Deliveroo, or Airbnb construct digital reputation scores by combining these consumer data with their own information from the algorithmic surveillance of workers. Trustworthy behavior is subsequently rewarded with a good reputation score and higher potential earnings, while untrustworthy behavior can be algorithmically penalized. (...)
  • Twenty-four years of empirical research on trust in AI: a bibliometric review of trends, overlooked issues, and future directions. Michaela Benk, Sophie Kerstan, Florian von Wangenheim & Andrea Ferrario - forthcoming - AI and Society:1-24.
    Trust is widely regarded as a critical component to building artificial intelligence (AI) systems that people will use and safely rely upon. As research in this area continues to evolve, it becomes imperative that the research community synchronizes its empirical efforts and aligns on the path toward effective knowledge creation. To lay the groundwork toward achieving this objective, we performed a comprehensive bibliometric analysis, supplemented with a qualitative content analysis of over two decades of empirical research measuring trust in AI, (...)
  • Philosophy of education in a changing digital environment: an epistemological scope of the problem. Raigul Salimova, Jamilya Nurmanbetova, Maira Kozhamzharova, Mira Manassova & Saltanat Aubakirova - forthcoming - AI and Society:1-12.
    The relevance of this study's topic is supported by the argument that a philosophical understanding of the fundamental concepts of epistemology as they pertain to the educational process is crucial as the educational setting becomes increasingly digitalised. This paper aims to explore the epistemological component of the philosophy of learning in light of the educational process digitalisation. The research comprised a sample of 462 university students from Kazakhstan, with 227 participants assigned to the experimental and 235 to the control groups. (...)
  • ‘Worldview’ of the AIGC systems: stability, tendency and polarization. Hexiang Liu - forthcoming - AI and Society:1-14.
    This study aims to investigate the worldview characteristics of current systems of artificial intelligence generated content (AIGC). Eight representative AIGC systems were selected as research objects, and their responses and ratings to the viewpoints in Devlin’s CWQ worldview scale were elicited through a unified questioning approach. Based on the item-by-item ratings provided by the systems, the worldviews reflected in the AIGC systems were analyzed from three aspects: stability, tendency, and polarity. The research found that AIGC systems demonstrate general stability and relatively consistent (...)
  • How the EU AI Act Seeks to Establish an Epistemic Environment of Trust. Calvin Wai-Loon Ho & Karel Caals - 2024 - Asian Bioethics Review 16 (3):345-372.
    With focus on the development and use of artificial intelligence (AI) systems in the digital health context, we consider the following questions: How does the European Union (EU) seek to facilitate the development and uptake of trustworthy AI systems through the AI Act? What does trustworthiness and trust mean in the AI Act, and how are they linked to some of the ongoing discussions of these terms in bioethics, law, and philosophy? What are the normative components of trustworthiness? And how (...)
  • Trust as a Solution to Human Vulnerability: Ethical Considerations on Trust in Care Robots. Mario Kropf - 2025 - Nursing Philosophy 26 (2):e70020.
    In the care sector, professionals face numerous challenges, such as a lack of resources, overloaded wards, physical and psychological strain, stressful constellations with patients and cooperation with medical professionals. Care robots are therefore increasingly being used to provide relief or to test new forms of interaction. However, this also raises the question of trust in these technical companions and the potential vulnerability to which these people then expose themselves. This article deals with an ethical analysis of the two concepts of (...)