  • Sven Nyholm, Humans and Robots: Ethics, Agency and Anthropomorphism. Lydia Farina - 2022 - Journal of Moral Philosophy 19 (2):221-224.
    How should human beings and robots interact with one another? Nyholm’s answer to this question is given in the form of a conditional: if a robot looks or behaves like an animal or a human being, then we should treat it with a degree of moral consideration (p. 201). Although this is not a novel claim in the literature on AI ethics, what is new is the reason Nyholm gives to support this claim; we should treat robots that look (...)
  • Trust in technology: interlocking trust concepts for privacy respecting video surveillance. Sebastian Weydner-Volkmann & Linus Feiten - 2021 - Journal of Information, Communication and Ethics in Society 19 (4):506-520.
    Purpose The purpose of this paper is to defend the notion of “trust in technology” against the philosophical view that this concept is misled and unsuitable for ethical evaluation. In contrast, it is shown that “trustworthy technology” addresses a critical societal need in the digital age as it is inclusive of IT-security risks not only from a technical but also from a public layperson perspective. Design/methodology/approach From an interdisciplinary perspective between philosophy and IT-security, the authors discuss a potential instantiation of a (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • (1 other version) A unified framework of five principles for AI in society. Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
  • Trust and Distributed Epistemic Labor. Boaz Miller & Ori Freiman - 2019 - In Judith Simon (ed.), The Routledge Handbook of Trust and Philosophy. Routledge. pp. 341-353.
    This chapter explores properties that bind individuals, knowledge, and communities together. Section 1 introduces Hardwig’s argument from trust in others’ testimonies as entailing that trust is the glue that binds individuals into communities. Section 2 asks “what grounds trust?” by exploring assessment of collaborators’ explanatory responsiveness, formal indicators such as affiliation and credibility, appreciation of peers’ tacit knowledge, game-theoretical considerations, and the role moral character of peers, social biases, and social values play in grounding trust. Section 3 deals with establishing (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
  • The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • Humans and Robots: Ethics, Agency, and Anthropomorphism. Sven Nyholm - 2020 - Rowman & Littlefield International.
    This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
  • Can Artificial Entities Assert? Ori Freiman & Boaz Miller - 2018 - In Sanford C. Goldberg (ed.), The Oxford Handbook of Assertion. Oxford University Press. pp. 415-436.
    There is an existing debate regarding the view that technological instruments, devices, or machines can assert or testify. A standard view in epistemology is that only humans can testify. However, the notion of quasi-testimony acknowledges that technological devices can assert or testify under some conditions, without denying that humans and machines are not the same. Indeed, there are four relevant differences between humans and instruments. First, unlike humans, machine assertion is not imaginative or playful. Second, machine assertion is prescripted and (...)
  • Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
  • (1 other version) Translating principles into practices of digital ethics: five risks of being unethical. Luciano Floridi - 2019 - Philosophy and Technology 32 (2):185-193.
    Modern digital technologies—from web-based services to Artificial Intelligence (AI) solutions—increasingly affect the daily lives of billions of people. Such innovation brings huge opportunities, but also concerns about design, development, and deployment of digital technologies. This article identifies and discusses five clusters of risk in the international debate about digital ethics: ethics shopping; ethics bluewashing; ethics lobbying; ethics dumping; and ethics shirking.
  • The Oxford Handbook of Ethics of AI. Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.) - 2020 - Oxford Handbooks.
    This 44-chapter volume tackles a quickly-evolving field of inquiry, mapping the existing discourse as part of a general attempt to place current developments in historical context; at the same time, breaking new ground in taking on novel subjects and pursuing fresh approaches. The term "A.I." is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and (...)
  • You Can Trust the Ladder, But You Shouldn't. Jonathan Tallant - 2019 - Theoria 85 (2):102-118.
    My claim in this article is that, contra what I take to be the orthodoxy in the wider literature, we do trust inanimate objects – per the example in the title, there are cases where people really do trust a ladder (to hold their weight, for instance), and, perhaps most importantly, that this poses a challenge to that orthodoxy. My argument consists of four parts. In Section 2 I introduce an alleged distinction between trust as mere reliance and trust as (...)
  • The other question: can and should robots have rights? David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
  • Can We Make Sense of the Notion of Trustworthy Technology? Philip J. Nickel, Maarten Franssen & Peter Kroes - 2010 - Knowledge, Technology & Policy 23 (3):429-444.
    In this paper we raise the question whether technological artifacts can properly speaking be trusted or said to be trustworthy. First, we set out some prevalent accounts of trust and trustworthiness and explain how they compare with the engineer’s notion of reliability. We distinguish between pure rational-choice accounts of trust, which do not differ in principle from mere judgments of reliability, and what we call “motivation-attributing” accounts of trust, which attribute specific motivations to trustworthy entities. Then we consider some examples (...)
  • Trust in technological systems. Philip J. Nickel - 2013 - In M. J. de Vries, S. O. Hansson & A. W. M. Meijers (eds.), Norms in technology: Philosophy of Engineering and Technology, Vol. 9. Springer.
    Technology is a practically indispensable means for satisfying one’s basic interests in all central areas of human life including nutrition, habitation, health care, entertainment, transportation, and social interaction. It is impossible for any one person, even a well-trained scientist or engineer, to know enough about how technology works in these different areas to make a calculated choice about whether to rely on the vast majority of the technologies she/he in fact relies upon. Yet, there are substantial risks, uncertainties, and unforeseen (...)
  • Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • The entanglement of trust and knowledge on the web. Judith Simon - 2010 - Ethics and Information Technology 12 (4):343-355.
    In this paper I use philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this fundamental entanglement, we do not trust blindly. Instead we make use of knowledge to rationally place or withdraw trust. We use knowledge about the sources of epistemic content as well as general background knowledge to assess epistemic claims. Hence, although we may (...)
  • Trust as an affective attitude. Karen Jones - 1996 - Ethics 107 (1):4-25.
  • Trust and antitrust. Annette Baier - 1986 - Ethics 96 (2):231-260.
  • Trustworthiness. Karen Jones - 2012 - Ethics 123 (1):61-85.
    I present and defend an account of three-place trustworthiness according to which B is trustworthy with respect to A in domain of interaction D, if and only if she is competent with respect to that domain, and she would take the fact that A is counting on her, were A to do so in this domain, to be a compelling reason for acting as counted on. This is not the whole story of trustworthiness, however, for we want those we can (...)
  • A Leap of Faith: Is There a Formula for “Trustworthy” AI? Matthias Braun, Hannah Bleher & Patrik Hummel - 2021 - Hastings Center Report 51 (3):17-22.
    Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High‐Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust in AI (...)
  • Qu'est-ce que la confiance? Gloria Origgi - 2008 - Librairie Philosophique Vrin.
    The notion of trust is examined here in its personal, moral, scientific, and political dimensions, with texts by A. Baier and D. Hume.
  • (1 other version) The Street-Level Epistemology of Trust. Russell Hardin - 1992 - Analyse & Kritik 14 (2):152-176.
    Rational choice and other accounts of trust base it in objective assessments of the risks and benefits of trusting. But rational subjects must choose in the light of what knowledge they have, and that knowledge determines their capacities for trust. This is an epistemological issue, but not at the usual level of the philosophy of knowledge. Rather, it is an issue of pragmatic rationality for a given actor. It is commonly argued that trust is inherently embedded in iterated, thick relationships. (...)
  • It’s Not About Technology. Joseph C. Pitt - 2010 - Knowledge, Technology & Policy 23 (3):445-454.
    It is argued that the question “Can we trust technology?” is unanswerable because it is open-ended. Only questions about specific issues that can have specific answers should be entertained. It is further argued that the reason the question cannot be answered is that there is no such thing as Technology _simpliciter_. Fundamentally, the question comes down to trusting people and even then, the question has to be specific about trusting a person to do this or that.
  • Trust on the line: a philosophical exploration of trust in the networked era. Esther Keymolen - 2016 - Oisterwijk, Netherlands: Wolf Legal Publishers.
    Governments, companies, and citizens all think trust is important. Especially today, in the networked era, where we make use of all sorts of e-services and increasingly interact and buy online, trust has become a necessary condition for society to thrive. But what do we mean when we talk about trust and how does the rise of the Internet transform the functioning of trust? This books starts off with a thorough conceptual analysis of trust, drawing on insights from - amongst others (...)
  • Bias and values in scientific research. Torsten Wilholt - 2009 - Studies in History and Philosophy of Science Part A 40 (1):92-101.
    When interests and preferences of researchers or their sponsors cause bias in experimental design, data interpretation or dissemination of research results, we normally think of it as an epistemic shortcoming. But as a result of the debate on science and values, the idea that all extra-scientific influences on research could be singled out and separated from pure science is now widely believed to be an illusion. I argue that nonetheless, there are cases in which research is rightfully regarded as epistemologically (...)
  • Mapping the Stony Road toward Trustworthy AI: Expectations, Problems, Conundrums. Gernot Rieder, Judith Simon & Pak-Hang Wong - 2021 - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. MIT Press.
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)