References
  • Moral appearances: emotions, robots, and human morality. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3): 235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that (...)
  • Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1): 53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1): 39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. (...)
  • Trust and antitrust. Annette Baier - 1986 - Ethics 96 (2): 231-260.
  • Robo- and informationethics: some fundamentals. Michael P. Decker & Mathias Gutmann (eds.) - 2012 - Zürich: Lit.
    This book focuses on some of the most pressing methodological, ethical, and philosophy-of-technology questions connected with the concept of artificial autonomous systems. (Series: Hermeneutics and Anthropology / Hermeneutik und ...
  • Trust. Carolyn McLeod - 2020 - Stanford Encyclopedia of Philosophy.
    A summary of the philosophical literature on trust.
  • Authenticity in the age of digital companions. Sherry Turkle - 2007 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 8 (3): 501-517.
    The first generation of children to grow up with electronic toys and games saw computers as our “nearest neighbors.” They spoke of computers as rational machines and of people as emotional machines, a fragile formulation destined to be challenged. By the mid-1990s, computational creatures, including robots, were presenting themselves as “relational artifacts,” beings with feelings and needs. One consequence of this development is a crisis in authenticity in many quarters. In an increasing number of situations, people behave as though they (...)
  • Can we Develop Artificial Agents Capable of Making Good Moral Decisions? Review of Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9). Herman T. Tavani - 2011 - Minds and Machines 21 (3): 465-474.
  • The case for e-trust. Mariarosaria Taddeo & Luciano Floridi - 2011 - Ethics and Information Technology 13 (1): 1-3.
  • Trust in Technology: A Distinctive and a Problematic Relation. Mariarosaria Taddeo - 2010 - Knowledge, Technology & Policy 23 (3): 283-286.
    The use of tools and artefacts is a distinctive and problematic phenomenon in the history of humanity, and as such it has been a topic of discussion since the beginning of Western culture, from the myths of the Ancient Greeks through Humanism and Romanticism to Heidegger. Several questionable aspects have been brought to the fore: the relation between technology and the arts, the effects of the use of technology both on the world and on the user, and the nature of the (...)
  • Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2): 243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)
  • The entanglement of trust and knowledge on the web. Judith Simon - 2010 - Ethics and Information Technology 12 (4): 343-355.
    In this paper I use philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this fundamental entanglement, we do not trust blindly. Instead we make use of knowledge to rationally place or withdraw trust. We use knowledge about the sources of epistemic content as well as general background knowledge to assess epistemic claims. Hence, although we may (...)
  • Towards a theory of privacy in the information age. James H. Moor - 1997 - ACM SIGCAS Computers and Society 27 (3): 27-32.
  • Computer systems: Moral entities but not moral agents. Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4): 195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • What is computer ethics? James H. Moor - 1985 - Metaphilosophy 16 (4): 266-275.
  • Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1): 19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1): 17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3): 349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Massimo Durante - 2010 - Knowledge, Technology & Policy 23 (3): 347-366.
    A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses: one suggests that only reliance applies to artificial agents, because predictability of agents’ digital interaction is viewed as an absolute value and human relation is (...)
  • Moral Repair: Reconstructing Moral Relations After Wrongdoing. Margaret Urban Walker - 2006 - Cambridge University Press.
    Moral Repair examines the ethics and moral psychology of responses to wrongdoing. Explaining the emotional bonds and normative expectations that keep human beings responsive to moral standards and responsible to each other, Margaret Urban Walker uses realistic examples of both personal betrayal and political violence to analyze how moral bonds are damaged by serious wrongs and what must be done to repair the damage. Focusing on victims of wrong, their right to validation, and their sense of justice, Walker presents a (...)
  • Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - New York: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue (...)
  • Robot Ethics: The Ethical and Social Implications of Robotics. Patrick Lin, Keith Abney & George A. Bekey (eds.) - 2011 - MIT Press.
    Robots today serve in many roles, from entertainer to educator to executioner. As robotics technology advances, ethical concerns become more pressing: Should robots be programmed to follow a code of ethics, if this is even possible? Are there risks in forming emotional bonds with robots? How might society--and ethics--change with robotics? This volume is the first book to bring together prominent scholars and experts from both science and the humanities to explore these and other questions in this emerging field. Starting (...)
  • Machine Ethics. Michael Anderson & Susan Leigh Anderson (eds.) - 2011 - Cambridge University Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  • Defining Trust and E-trust: Old Theories and New Problems. Mariarosaria Taddeo - 2009 - International Journal of Technology and Human Interaction 5 (2): 23-35.
    The paper provides a selective analysis of the main theories of trust and e-trust (that is, trust in digital environments) developed in the last twenty years, with the goal of preparing the ground for a new philosophical approach to solve the problems facing them. It is divided into two parts. The first part is preliminary to the analysis of e-trust: it focuses on trust, its definition and foundation, and describes the general background on which the analysis of e-trust rests. (...)
  • The Nature, Importance, and Difficulty of Machine Ethics. James Moor - 2006 - IEEE Intelligent Systems 21: 18-21.