  • (1 other version) Trust, Distrust and Commitment. Katherine Hawley - 2014 - Noûs 48 (1):1-20.
    I outline a number of parallels between trust and distrust, emphasising the significance of situations in which both trust and distrust would be an imposition upon the (dis)trustee. I develop an account of both trust and distrust in terms of commitment, and argue that this enables us to understand the nature of trustworthiness. Note that this article is available open access on the journal website.
  • Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • (4 other versions) Is Justified True Belief Knowledge? Edmund Gettier - 1963 - Analysis 23 (6):121-123.
    Edmund Gettier is Professor Emeritus at the University of Massachusetts, Amherst. This short piece, published in 1963, seemed to many decisively to refute an otherwise attractive analysis of knowledge. It stimulated a renewed effort, still ongoing, to clarify exactly what knowledge comprises.
  • Trust and antitrust. Annette Baier - 1986 - Ethics 96 (2):231-260.
  • The global landscape of AI ethics guidelines. A. Jobin, M. Ienca & E. Vayena - 2019 - Nature Machine Intelligence 1.
  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Trust does not need to be human: it is possible to trust medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human–human interactions. (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism. Juan M. Durán & Nico Formanek - 2018 - Minds and Machines 28 (4):645-666.
    Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations :483–496, 2009; Morrison in Philos Stud 143:33–57, 2009), the nature of computer data Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of (...)
  • Trust and Power. Niklas Luhmann - 1982 - Studies in Soviet Thought 23 (3):266-270.
  • Trustworthiness. Russell Hardin - 1996 - Ethics 107 (1):26-42.
  • Principles of Biomedical Ethics. Ezekiel J. Emanuel, Tom L. Beauchamp & James F. Childress - 1995 - Hastings Center Report 25 (4):37.
    Book reviewed in this article: Principles of Biomedical Ethics. By Tom L. Beauchamp and James F. Childress.
  • Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) (...)
  • You Can Trust the Ladder, But You Shouldn't. Jonathan Tallant - 2019 - Theoria 85 (2):102-118.
    My claim in this article is that, contra what I take to be the orthodoxy in the wider literature, we do trust inanimate objects – per the example in the title, there are cases where people really do trust a ladder (to hold their weight, for instance), and, perhaps most importantly, that this poses a challenge to that orthodoxy. My argument consists of four parts. In Section 2 I introduce an alleged distinction between trust as mere reliance and trust as (...)
  • Towards a pragmatist dealing with algorithmic bias in medical machine learning. Georg Starke, Eva De Clercq & Bernice S. Elger - 2021 - Medicine, Health Care and Philosophy 24 (3):341-349.
    Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge regards discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases which do not, leading to misdiagnosis and erroneous (...)
  • Trust as noncognitive security about motives. Lawrence C. Becker - 1996 - Ethics 107 (1):43-61.
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)