  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • Autonomous reboot: Aristotle, autonomy and the ends of machine ethics. Jeffrey White - 2022 - AI and Society 37 (2):647-659.
    Tonkens has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. (...)
  • Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. They need to do (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis. Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
    Service and personal care robots are starting to cross the threshold into the wilderness of everyday life, where they are supposed to interact with inexperienced lay users in a changing environment. In order to function as intended, robots must become independent entities that monitor themselves and improve their own behaviours based on learning outcomes in practice. This poses a great challenge to robotics, which we are calling the “autonomy-safety-paradox” (ASP). The integration of robot applications into society requires the reconciliation of (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots. Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)