Citations

  • Defining Trust and E-trust: Old Theories and New Problems. Mariarosaria Taddeo - 2009 - International Journal of Technology and Human Interaction 5 (2):23-35.
    The paper provides a selective analysis of the main theories of trust and e-trust (that is, trust in digital environments) developed in the last twenty years, with the goal of preparing the ground for a new philosophical approach to solve the problems facing them. It is divided into two parts. The first part lays the groundwork for the analysis of e-trust: it focuses on trust, its definition and foundation, and describes the general background on which the analysis of e-trust rests. (...)
  • The method of levels of abstraction. Luciano Floridi - 2008 - Minds and Machines 18 (3):303-329.
    The use of “levels of abstraction” in philosophical analysis (levelism) has recently come under attack. In this paper, I argue that a refined version of epistemological levelism should be retained as a fundamental method, called the method of levels of abstraction. After a brief introduction, in section “Some Definitions and Preliminary Examples” the nature and applicability of the epistemological method of levels of abstraction is clarified. In section “A Classic Application of the Method of Abstraction”, the philosophical fruitfulness of the new (...)
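    Floridi defines a level of abstraction (LoA) as a finite, non-empty set of typed observables. The sketch below is a toy rendering of that definition under our own assumptions (the lamp example and all names are illustrative, not drawn from the paper): the same system state is projected onto two different LoAs.

```python
# Toy rendering of a level of abstraction (LoA) as a finite set of typed
# observables. The lamp example and the names here are illustrative
# assumptions, not code from Floridi's paper.
from typing import Any

# Two LoAs for the same system: an electrical view and a user's view.
LOA_ELECTRICAL = {"voltage": float, "current": float}
LOA_USER = {"light_on": bool}

def observe(state: dict[str, Any], loa: dict[str, type]) -> dict[str, Any]:
    """Project a full system state onto the observables one LoA exposes."""
    return {name: value for name, value in state.items() if name in loa}

state = {"voltage": 4.9, "current": 0.02, "light_on": True}
print(observe(state, LOA_ELECTRICAL))  # {'voltage': 4.9, 'current': 0.02}
print(observe(state, LOA_USER))        # {'light_on': True}
```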
  • Legal personhood for artificial intelligences. Lawrence B. Solum - 1992 - North Carolina Law Review 70:1231.
    Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer. Artificial intelligence (AI) (...)
  • Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Maj Tuomela & Solveig Hofmann - 2003 - Ethics and Information Technology 5 (3):163-176.
    A program for the simulation of rational social normative trust, predictive ‘trust,’ and predictive reliance between agents will be introduced. It offers a tool for social scientists, or a trust component for multi-agent simulations/multi-agent systems that need to include trust between agents to guide decisions about the course of action. It is based on an analysis of rational social normative trust (RSNTR) (a revised version of M. Tuomela 2002), which is presented and briefly argued. For collective agents, belief conditions for (...)
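    Since the entry describes a runnable trust component for multi-agent systems, a minimal sketch may help fix ideas. The class below, with its split between a norm-based record (normative trust) and a purely behavioural record (predictive ‘trust’ and reliance), is our own illustrative assumption, not the authors' program.

```python
# Hypothetical trust component for a multi-agent simulation, loosely
# echoing the paper's distinction between normative trust, predictive
# 'trust', and predictive reliance. All names and formulas are assumed.
from dataclasses import dataclass

@dataclass
class TrustRecord:
    """One agent's trust-relevant evidence about another agent."""
    kept_promises: int = 0    # commitments the other agent honoured
    broken_promises: int = 0  # commitments the other agent violated
    hits: int = 0             # behavioural predictions that held
    misses: int = 0           # behavioural predictions that failed

    def normative_trust(self) -> float:
        """Trust grounded in the other agent's norm-following record."""
        total = self.kept_promises + self.broken_promises
        return self.kept_promises / total if total else 0.5

    def predictive_trust(self) -> float:
        """'Trust' grounded only in observed behavioural regularity."""
        total = self.hits + self.misses
        return self.hits / total if total else 0.5

    def rely(self, threshold: float = 0.7) -> bool:
        """Predictive reliance: act on expected behaviour, norms aside."""
        return self.predictive_trust() >= threshold

# Update the record as interactions occur; query it when an agent must
# decide whether to depend on another agent for a task.
record = TrustRecord(kept_promises=8, broken_promises=2, hits=9, misses=1)
print(record.normative_trust(), record.predictive_trust(), record.rely())
```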
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
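    To make the object-oriented framing concrete, here is a speculative sketch of how trustor/trustee pairings might be factored into classes. The hierarchy and the classification below are our own assumptions for illustration; they do not reproduce the authors' model.

```python
# Speculative object-oriented sketch of trust interactions between human
# and artificial agents. Classes and the labelling scheme are assumptions.
class Agent:
    def __init__(self, name: str):
        self.name = name

class HumanAgent(Agent):
    pass

class ArtificialAgent(Agent):
    def __init__(self, name: str, developer: HumanAgent):
        super().__init__(name)
        self.developer = developer  # the human who stands behind the AA

class TrustInteraction:
    """A single trusting episode between a trustor and a trustee."""
    def __init__(self, trustor: Agent, trustee: Agent):
        self.trustor, self.trustee = trustor, trustee

    def kind(self) -> str:
        """Label the episode by the kinds of agents involved."""
        t = "human" if isinstance(self.trustor, HumanAgent) else "artificial"
        e = "human" if isinstance(self.trustee, HumanAgent) else "artificial"
        return f"{t}-to-{e}"

dev = HumanAgent("developer")
bot = ArtificialAgent("used-car-bot", developer=dev)
print(TrustInteraction(HumanAgent("buyer"), bot).kind())  # human-to-artificial
```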
  • Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)
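    On one natural expected-utility reading of the “best option” idea (our assumption, not Taddeo's own formalism), a trustor e-trusts a trustee when delegating without supervision beats both doing the task itself and supervised delegation:

```python
# Minimal sketch: e-trust as rational, unsupervised delegation. The cost
# model and the numbers are illustrative assumptions, not the paper's.
def best_option(p_self: float, p_other: float,
                cost_self: float, cost_supervision: float) -> str:
    """Pick the act with the highest expected net benefit for one goal."""
    options = {
        "do-it-myself": p_self - cost_self,
        "delegate-and-supervise": p_other - cost_supervision,
        "delegate-unsupervised": p_other,  # e-trust: no oversight cost paid
    }
    return max(options, key=options.get)

# A more reliable trustee plus any positive supervision cost makes
# unsupervised delegation, i.e. e-trust, the rational choice.
print(best_option(p_self=0.8, p_other=0.9, cost_self=0.3, cost_supervision=0.1))
```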
  • How just could a robot war be? Peter Asaro - 2008 - In P. Brey, A. Briggle & K. Waelbers (eds.), Current Issues in Computing and Philosophy. IOS Press. pp. 50-64.
  • Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
  • Guilty Robots, Happy Dogs: The Question of Alien Minds. David McFarland - 2008 - Oxford University Press.
    Do animals have thoughts and feelings? Could robots have minds like our own? Can we ever know, or will the answer be forever out of our reach? David McFarland explores the answers to these questions, drawing not only on the philosophy of mind, but also on developments in artificial intelligence, robots, and the science of animal behaviour.
  • Trust and Power. Niklas Luhmann - 1982 - Studies in Soviet Thought 23 (3):266-270.