References

  • Trusting the (ro)botic other. Paul B. de Laat - 2015 - ACM SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available, though. As debated in the literature, humans may meet bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as (...)
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. (...)
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13:39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • The relationships among consumers’ ethical ideology, risk aversion and ethically-based distrust of online retailers and the moderating role of consumers’ need for personal interaction. Isabel P. Riquelme & Sergio Román - 2014 - Ethics and Information Technology 16 (2):135-155.
    Consumer distrust has only recently begun to be recognized as an important e-commerce issue, and, unlike online trust, the nature and role of distrust are much less established. This study examines the influence of two important consumer characteristics on consumers’ ethically-based distrust of online retailers. It also tests the moderating role of consumers’ need for personal contact with sales staff. Results from 409 online consumers confirm that both relativist-based ethical ideology and risk aversion are strongly and positively related to consumers’ (...)
  • Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D. Christian Reuter, Thea Riebe & Stefka Schmid - 2022 - Science and Engineering Ethics 28 (2):1-23.
    Artificial Intelligence (AI) seems to be impacting all industry sectors while becoming a driver of innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential, has drawn attention from security and ethics scholars. With the publication of the European Union’s (EU) ethical guideline Trustworthy AI, normative questions on the application of AI have been further evaluated. In order to draw conclusions on Trustworthy AI as a point of reference for responsible research (...)
  • Robotrust and Legal Responsibility. Ugo Pagallo - 2010 - Knowledge, Technology & Policy 23 (3):367-379.
    The paper examines some aspects of today’s debate on trust and e-trust and, more specifically, issues of legal responsibility for the production and use of robots. Their impact on human-to-human interaction has produced new problems in the fields of both contractual and extra-contractual liability, in that robots negotiate, enter into contracts, and establish rights and obligations between humans, while reshaping matters of responsibility and risk in trust relations. Whether or not robotrust concerns human-to-robot or even robot-to-robot relations, there is a new (...)
  • Moral responsibility for computing artifacts: the rules and issues of trust. F. S. Grodzinsky, K. Miller & M. J. Wolf - 2012 - ACM SIGCAS Computers and Society 42 (2):15-25.
    "The Rules" are found in a collaborative document that states principles for responsibility when a computer artifact is designed, developed and deployed into a sociotechnical system. At this writing, over 50 people from nine countries have signed onto The Rules. Unlike codes of ethics, The Rules are not tied to any organization, and computer users as well as computing professionals are invited to sign onto The Rules. The emphasis in The Rules is that both users and professionals have responsibilities in (...)
  • Developing Automated Deceptions and the Impact on Trust. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2015 - Philosophy and Technology 28 (1):91-105.
    As software developers design artificial agents, they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  • Organizational trust in a networked world. Luca Giustiniano & Francesco Bolici - 2012 - Journal of Information, Communication and Ethics in Society 10 (3):187-202.
    Trust is a social factor at the foundations of human action. The pervasiveness of trust explains why it has been studied by a large variety of disciplines, and its complexity justifies the difficulties in reaching a shared understanding and definition. Like all social factors, trust is continuously evolving as a result of changes in social, economic and technological conditions. The internet and many other Information and Communication Technologies (ICT) solutions have changed organizational and social life. Such mutated (...)
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Machine learning models and algorithms, the real engines of the artificial intelligence revolution, are nowadays embedded in many services and products around us. We argue that, as a society, it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Massimo Durante - 2010 - Knowledge, Technology & Policy 23 (3):347-366.
    A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses; one suggests that only reliance applies to artificial agents, because predictability of agents’ digital interaction is viewed as an absolute value and human relation is (...)
  • The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)