References
  • The case for e-trust. Mariarosaria Taddeo & Luciano Floridi - 2011 - Ethics and Information Technology 13 (1):1-3.
  • Trusting Our Selves to Technology. Asle H. Kiran & Peter-Paul Verbeek - 2010 - Knowledge, Technology & Policy 23 (3):409-427.
    Trust is a central dimension in the relation between human beings and technologies. In many discourses about technology, the relation between human beings and technologies is conceptualized as an external relation: a relation between pre-given entities that can have an impact on each other but that do not mutually constitute each other. From this perspective, relations of trust can vary between _reliance_, as is present for instance in technological extensionism, and _suspicion_, as in various precautionary approaches in ethics that focus (...)
  • Trustworthiness. Karen Jones - 2012 - Ethics 123 (1):61-85.
    I present and defend an account of three-place trustworthiness according to which B is trustworthy with respect to A in domain of interaction D, if and only if she is competent with respect to that domain, and she would take the fact that A is counting on her, were A to do so in this domain, to be a compelling reason for acting as counted on. This is not the whole story of trustworthiness, however, for we want those we can (...)
  • What Is Trust? Thomas W. Simpson - 2012 - Pacific Philosophical Quarterly 93 (4):550-569.
    Trust is difficult to define. Instead of doing so, I propose that the best way to understand the concept is through a genealogical account. I show how a root notion of trust arises out of some basic features of what it is for humans to live socially, in which we rely on others to act cooperatively. I explore how this concept acquires resonances of hope and threat, and how we analogically apply this in related but different contexts. The genealogical account (...)
  • Creating Trust. Robert C. Solomon - 1998 - Business Ethics Quarterly 8 (2):205-232.
    In this essay, we argue that trust is a dynamic emotional relationship which entails responsibility. Trust is not a social substance, a medium, or a mysterious entity but rather a set of social practices, defined by our choices, to trust or not to trust. We discuss the differences and the relationship between trust and trustworthiness, and we distinguish several different kinds or “levels” of trust, simple trust, basic trust, “blind” trust, and authentic trust. We then argue that trust as an (...)
  • The philosophical novelty of computer simulation methods. Paul Humphreys - 2009 - Synthese 169 (3):615-626.
    Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science. These issues are both epistemological and methodological in kind.
  • (1 other version) Trust as an affective attitude. Karen Jones - 1996 - Ethics 107 (1):4-25.
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Trust and antitrust. Annette Baier - 1986 - Ethics 96 (2):231-260.
  • A Survey of Methods for Explaining Black Box Models. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti & Dino Pedreschi - 2019 - ACM Computing Surveys 51 (5):1-42.
  • Explainable AI: From black box to glass box. A. Rai - 2020 - Journal of the Academy of Marketing Science 48.
  • (1 other version) Ethics-based auditing to develop trustworthy AI. Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323-327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Trust and Accountability in a Digital Age. Onora O'Neill - 2020 - Philosophy 95 (1):3-17.
    I have a very particular reason to be grateful to Stewart Sutherland, our late President, which is connected to some of the themes of this lecture, so want to begin by recalling a long conversation I had with him on these topics.
  • Trust as a Public Virtue. Warren Von Eschenbach (ed.) - 2019 - London and New York.
    Western societies are experiencing a crisis of trust: we no longer enjoy high levels of confidence in social institutions and are increasingly skeptical of those holding positions of authority. The crisis of trust, however, seems paradoxical: at the same time we report greater feelings of mistrust or an erosion of trust in institutions and technologies we increasingly entrust our wellbeing and security to these very same technologies and institutions. Analyzing trust not only will help resolve the paradox but suggests that (...)
  • The Threat of Algocracy: Reality, Resistance and Accommodation. John Danaher - 2016 - Philosophy and Technology 29 (3):245-268.
    One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat (...)
  • Can We Make Sense of the Notion of Trustworthy Technology? Philip J. Nickel, Maarten Franssen & Peter Kroes - 2010 - Knowledge, Technology & Policy 23 (3):429-444.
    In this paper we raise the question whether technological artifacts can properly speaking be trusted or said to be trustworthy. First, we set out some prevalent accounts of trust and trustworthiness and explain how they compare with the engineer’s notion of reliability. We distinguish between pure rational-choice accounts of trust, which do not differ in principle from mere judgments of reliability, and what we call “motivation-attributing” accounts of trust, which attribute specific motivations to trustworthy entities. Then we consider some examples (...)
  • The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” :689–707, 2018). There (...)
  • Appraising Black-Boxed Technology: the Positive Prospects. E. S. Dahl - 2018 - Philosophy and Technology 31 (4):571-591.
    One staple of living in our information society is having access to the web. Web-connected devices interpret our queries and retrieve information from the web in response. Today’s web devices even purport to answer our queries directly without requiring us to comb through search results in order to find the information we want. How do we know whether a web device is trustworthy? One way to know is to learn why the device is trustworthy by inspecting its inner workings, 156–170 (...)
  • What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Massimo Durante - 2010 - Knowledge, Technology & Policy 23 (3):347-366.
    A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypothesis; one suggests that only reliance applies to artificial agents, because predictability of agents’ digital interaction is viewed as an absolute value and human relation is (...)
  • It’s Not About Technology. Joseph C. Pitt - 2010 - Knowledge, Technology & Policy 23 (3):445-454.
    It is argued that the question “Can we trust technology?” is unanswerable because it is open-ended. Only questions about specific issues that can have specific answers should be entertained. It is further argued that the reason the question cannot be answered is that there is no such thing as Technology _simpliciter_. Fundamentally, the question comes down to trusting people and even then, the question has to be specific about trusting a person to do this or that.
  • The Cunning of Trust. Philip Pettit - 1995 - Philosophy and Public Affairs 24 (3):202-225.
  • Introduction: the Governance of Algorithms. Marcello D’Agostino & Massimo Durante - 2018 - Philosophy and Technology 31 (4):499-505.
    In our information societies, tasks and decisions are increasingly outsourced to automated systems, machines, and artificial agents that mediate human relationships, by taking decisions and acting on the basis of algorithms. This raises a critical issue: how are algorithmic procedures and applications to be appraised and governed? This question needs to be investigated, if one wishes to avoid the traps of ICTs ending up in isolating humans behind their screens and digital delegates, or harnessing them in a passive role, by (...)