References
  • Objections to Simpson’s Argument in ‘Robots, Trust and War’. Carol Lord - 2019 - Ethics and Information Technology 21 (3):241-251.
  • How to Describe and Evaluate “Deception” Phenomena: Recasting the Metaphysics, Ethics, and Politics of ICTs in Terms of Magic and Performance and Taking a Relational and Narrative Turn. Mark Coeckelbergh - 2018 - Ethics and Information Technology 20 (2):71-85.
    Contemporary ICTs such as speaking machines and computer games tend to create illusions. Is this ethically problematic? Is it deception? And what kind of “reality” do we presuppose when we talk about illusion in this context? Inspired by work on similarities between ICT design and the art of magic and illusion, responding to literature on deception in robot ethics and related fields, and briefly considering the issue in the context of the history of machines, this paper discusses these questions through (...)
  • Robot Rights? Towards a Social-Relational Justification of Moral Consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
  • Artificial Moral Agents Are Infeasible with Foreseeable Technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour and humans will retain full responsibility.
  • Epistemological and Moral Problems with Human Enhancement. Fiorella Battaglia & Antonio Carnevale - 2014 - Humana Mente 7 (26).
  • Mind Perception of Robots Varies With Their Economic Versus Social Function. Xijing Wang & Eva G. Krumhuber - 2018 - Frontiers in Psychology 9.
  • Can We Trust Robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • Robots: Ethical by Design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Artificial Agents, Good Care, and Modernity. Mark Coeckelbergh - 2015 - Theoretical Medicine and Bioethics 36 (4):265-277.
    When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle no objection to using (...)