  • The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical, impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach that aims to reconcile (...)
  • The Icon and the Idol: A Christian Perspective on Sociable Robots. Jordan Joseph Wales - 2023 - In Jens Zimmermann (ed.), Human Flourishing in a Technological World: A Theological Perspective. Oxford University Press. pp. 94-115.
    Consulting early and medieval Christian thinkers, I theologically analyze the question of how we are to construe and live well with the sociable robot under the ancient theological concept of “glory”—the manifestation of God’s nature and life outside of himself. First, the oft-noted Western wariness toward robots may in part be rooted in protecting a certain idea of the “person” as a relational subject capable of self-gift. Historically, this understanding of the person derived from Christian belief in God the Trinity, (...)
  • On the Idea of Degrees of Moral Status. Dick Timmer - forthcoming - Journal of Value Inquiry:1-19.
    A central question in contemporary ethics and political philosophy concerns which entities have moral status. In this article, I provide a detailed analysis of the view that moral status comes in degrees. I argue that degrees of moral status can be specified along two dimensions: (i) the weight of the reason to protect an entity’s morally significant rights and interests; and/or (ii) the rights and interests that are considered morally significant. And I explore some of the complexities that arise when (...)
  • Engineering Responsibility. Nicholas Sars - 2022 - Ethics and Information Technology 24 (3):1-10.
    Many optimistic responses have been proposed to bridge the threat of responsibility gaps that artificial systems create. This paper identifies a question that arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advances such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good (...)
  • Can We Design Artificial Persons Without Being Manipulative? Maciej Musiał - forthcoming - AI and Society:1-10.
    If we could build artificial persons (APs) with a moral status comparable to that of a typical human being, what would be the right way to design them? This question has been addressed mainly in terms of designing APs devoted to being servants and debated in reference to their autonomy and the harm they might experience. Recently, it has been argued that even if developing AP servants would neither deprive them of autonomy nor cause any net harm, developing such (...)
  • The African Relational Account of Social Robots: A Step Back? John-Stewart Gordon - 2022 - Philosophy and Technology 35 (2):1-6.
  • Moral Uncertainty and Our Relationships with Unknown Minds. John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)