  • Why we should talk about institutional (dis)trustworthiness and medical machine learning. Michiel De Proost & Giorgia Pozzi - forthcoming - Medicine, Health Care and Philosophy:1-10.
    The principle of trust has been placed at the centre as an attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural level ex negativo as we aim to analyse the concept of “institutional distrustworthiness” to achieve a proper diagnosis of how we should not engage with medical machine learning. First, we begin with several examples that hint at (...)
  • Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Frank Ursin, Regina Müller, Florian Funer, Wenke Liedtke, David Renz, Svenja Wiertz & Robert Ranisch - 2024 - Medicine, Health Care and Philosophy 27 (4):513-528.
    Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the “empirical turn” in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased (...)
  • Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, at (...)
  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations. Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI systems? (...)
  • A Plea for (In)Human-centred AI. Matthias Braun & Darian Meacham - 2024 - Philosophy and Technology 37 (3):1-21.
    In this article, we use the account of the “inhuman” that is developed in the work of the French philosopher Jean-François Lyotard to develop a critique of human-centred AI. We argue that Lyotard’s philosophy not only provides resources for a negative critique of human-centred AI discourse, but also contains inspiration for a more constructive account of how the discourse around human-centred AI can take a broader view of the human that includes key dimensions of Lyotard’s inhuman, namely performativity, vulnerability, and (...)
  • Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth parents (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • Before and beyond trust: reliance in medical AI. Charalampia Kerasidou, Angeliki Kerasidou, Monika Buscher & Stephen Wilkinson - 2021 - Journal of Medical Ethics 48 (11):852-856.
    Artificial intelligence is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on trust (...)
  • Network of AI and trustworthy: response to Simion and Kelp’s account of trustworthy AI. Fei Song - 2023 - Asian Journal of Philosophy 2 (2):1-8.
    Simion and Kelp develop the obligation-based account of trustworthiness as a compelling general account of trustworthiness and then apply this account to various instances of AI. By doing so, they explain in what way any AI can be considered trustworthy, as per the general account. Simion and Kelp identify that any account of trustworthiness that relies on assumptions of agency that are too anthropocentric, such as the assumption that being trustworthy must involve goodwill. I argue that goodwill is a necessary condition for (...)
  • Contestable AI by Design: Towards a Framework. Kars Alfrink, Ianus Keller, Gerd Kortuem & Neelke Doorn - 2023 - Minds and Machines 33 (4):613-639.
    As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is (...)
  • Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Hannah Bleher & Matthias Braun - 2023 - Science and Engineering Ethics 29 (3):1-21.
    Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory–practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. Therefore, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each (...)
  • Artificial Intelligence in medicine: reshaping the face of medical practice. Max Tretter, David Samhammer & Peter Dabrock - 2023 - Ethik in der Medizin 36 (1):7-29.
    Background: The use of Artificial Intelligence (AI) has the potential to provide relief in the challenging and often stressful clinical setting for physicians. So far, however, the actual changes in work for physicians remain a prediction for the future, including new demands on the social level of medical practice. Thus, the question of how the requirements for physicians will change due to the implementation of AI is addressed. Methods: The question is approached through conceptual considerations based on the potentials that (...)