References
  • Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. (...)
  • Recent Developments in Computing and Philosophy. Anthony F. Beavers - 2011 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 42 (2):385-397.
    Because the label "computing and philosophy" can seem like an ad hoc attempt to tie computing to philosophy, it is important to explain why it is not, what it studies (or does) and how it differs from research in, say, "computing and history," or "computing and biology". The American Association for History and Computing is "dedicated to the reasonable and productive marriage of history and computer technology for teaching, researching and representing history through scholarship and public history" (http://theaahc.org). More pervasive, (...)
  • The Case of Online Trust. Matteo Turilli, Mariarosaria Taddeo & Antonino Vaccaro - 2010 - Knowledge, Technology & Policy 23 (3-4):333-345.
    This paper contributes to the debate on online trust addressing the problem of whether an online environment satisfies the necessary conditions for the emergence of trust. The paper defends the thesis that online environments can foster trust, and it does so in three steps. Firstly, the arguments proposed by the detractors of online trust are presented and analysed. Secondly, it is argued that trust can emerge in uncertain and risky environments and that it is possible to trust online identities when (...)
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13 (1):39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Computer Ethics as a Field of Applied Ethics. Herman T. Tavani - 2012 - Journal of Information Ethics 21 (2):52-70.
    The present essay includes an overview of key milestones in the development of computer ethics as a field of applied ethics. It also describes the ongoing debate about the proper scope of CE, as a subfield both in applied ethics and computer science. Following a brief description of the cluster of ethical issues that CE scholars and practitioners have generally considered to be the standard or "mainstream" issues comprising the field thus far, the essay speculates about the future direction of (...)
  • The case for e-trust. Mariarosaria Taddeo & Luciano Floridi - 2011 - Ethics and Information Technology 13 (1):1–3.
  • Trust in Technology: A Distinctive and a Problematic Relation. [REVIEW] Mariarosaria Taddeo - 2010 - Knowledge, Technology & Policy 23 (3):283-286.
    The use of tools and artefacts is a distinctive and problematic phenomenon in the history of humanity, and as such it has been a topic of discussion since the beginning of Western culture, from the myths of the Ancient Greek through Humanism and Romanticism to Heidegger. Several questionable aspects have been brought to the fore: the relation between technology and arts, the effects of the use of technology both on the world and on the user and the nature of the (...)
  • Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)
  • Competence and Trust in Choice Architecture. Evan Selinger & Kyle Powys Whyte - 2010 - Knowledge, Technology & Policy 23 (3-4):461-482.
    Richard Thaler and Cass Sunstein’s Nudge advances a theory of how designers can improve decision-making in various situations where people have to make choices. We claim that the moral acceptability of nudges hinges in part on whether they can provide an account of the competence required to offer nudges, an account that would serve to warrant our general trust in choice architects. What needs to be considered, on a methodological level, is whether they have clarified the competence required for choice (...)
  • Explanation and trust: what to tell the user in security and AI? [REVIEW] Wolter Pieters - 2011 - Ethics and Information Technology 13 (1):53-64.
    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, an important goal of explanation is to acquire or maintain the users’ trust. In this paper, I investigate the relation between explanation and trust in the context of computing science. (...)
  • Robotrust and Legal Responsibility. Ugo Pagallo - 2010 - Knowledge, Technology & Policy 23 (3):367-379.
    The paper examines some aspects of today’s debate on trust and e-trust and, more specifically, issues of legal responsibility for the production and use of robots. Their impact on human-to-human interaction has produced new problems both in the fields of contractual and extra-contractual liability in that robots negotiate, enter into contracts, establish rights and obligations between humans, while reshaping matters of responsibility and risk in trust relations. Whether or not robotrust concerns human-to-robot or even robot-to-robot relations, there is a new (...)
  • Ethics in e-trust and e-trustworthiness: the case of direct computer-patient interfaces. Philip J. Nickel - 2011 - Ethics and Information Technology 13 (2):355-363.
    In this paper, I examine the ethics of e-trust and e-trustworthiness in the context of health care, looking at direct computer-patient interfaces (DCPIs), information systems that provide medical information, diagnosis, advice, consenting and/or treatment directly to patients without clinicians as intermediaries. Designers, manufacturers and deployers of such systems have an ethical obligation to provide evidence of their trustworthiness to users. My argument for this claim is based on evidentialism about trust and trustworthiness: the idea that trust (...)
  • Developing Automated Deceptions and the Impact on Trust. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2015 - Philosophy and Technology 28 (1):91-105.
    As software developers design artificial agents, they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  • Organizational trust in a networked world. Luca Giustiniano & Francesco Bolici - 2012 - Journal of Information, Communication and Ethics in Society 10 (3):187-202.
    Purpose: Trust is a social factor at the foundations of human action. The pervasiveness of trust explains why it has been studied by a large variety of disciplines, and its complexity justifies the difficulties in reaching a shared understanding and definition. As with all social factors, trust is continuously evolving as a result of changes in social, economic and technological conditions. The internet and many other Information and Communication Technologies (ICT) solutions have changed organizational and social life. Such mutated (...)
  • Distributed morality in an information society. Luciano Floridi - 2013 - Science and Engineering Ethics 19 (3):727-743.
    The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can (...)
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Michele Farisco, Kathinka Evers & Arleen Salles - 2022 - Neuroethics 15 (1):1-12.
    Contemporary ethical analysis of Artificial Intelligence is growing rapidly. One of its most recognizable outcomes is the publication of a number of ethics guidelines that, intended to guide governmental policy, address issues raised by AI design, development, and implementation and generally present a set of recommendations. Here we propose two things: first, regarding content, since some of the applied issues raised by AI are related to fundamental questions about topics like intelligence, consciousness, and the ontological and ethical status of humans, (...)
  • What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Massimo Durante - 2010 - Knowledge, Technology & Policy 23 (3):347-366.
    A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses; one suggests that only reliance applies to artificial agents, because predictability of agents’ digital interaction is viewed as an absolute value and human relation is (...)