  • Materializing Morality: Design Ethics and Technological Mediation. Peter-Paul Verbeek - 2006 - Science, Technology, and Human Values 31 (3):361-380.
    During the past decade, the “script” concept, indicating how technologies prescribe human actions, has acquired a central place in STS. Until now, the concept has mainly functioned in descriptive settings. This article will deploy it in a normative setting. When technologies coshape human actions, they give material answers to the ethical question of how to act. This implies that engineers are doing “ethics by other means”: they materialize morality. The article will explore the implications of this insight for engineering ethics. (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • AI, agency and responsibility: the VW fraud case and beyond. Deborah G. Johnson & Mario Verdicchio - 2019 - AI and Society 34 (3):639-647.
    The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as (...)
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Levels of abstraction and the Turing test. Luciano Floridi - 2010 - Kybernetes 39 (3):423-440.
    An important lesson that philosophy can learn from the Turing Test and computer science more generally concerns the careful use of the method of Levels of Abstraction (LoA). In this paper, the method is first briefly summarised. The constituents of the method are “observables”, collected together and moderated by predicates restraining their “behaviour”. The resulting collection of sets of observables is called a “gradient of abstractions” and it formalises the minimum consistency conditions that the chosen abstractions must satisfy. Two useful (...)
  • Artificial evil and the foundation of computer ethics. L. Floridi & J. Sanders - 2000 - Etica E Politica 2 (2).
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, (...)