  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - forthcoming - Philosophy and Technology:1-19.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Oppressive Things. Shen‐yi Liao & Bryce Huebner - forthcoming - Philosophy and Phenomenological Research.
    In analyzing oppressive systems like racism, social theorists have articulated accounts of the dynamic interaction and mutual dependence between psychological components, such as individuals’ patterns of thought and action, and social components, such as formal institutions and informal interactions. We argue for the further inclusion of physical components, such as material artifacts and spatial environments. Drawing on socially situated and ecologically embedded approaches in the cognitive sciences, we argue that physical components of racism are not only shaped by, but also (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Can a Robot Be a Good Colleague? Sven Nyholm & Jilles Smids - forthcoming - Science and Engineering Ethics:1-20.
    This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make (...)
  • Other Minds, Other Intelligences: The Problem of Attributing Agency to Machines. Sven Nyholm - 2019 - Cambridge Quarterly of Healthcare Ethics 28 (4):592-598.
    John Harris discusses the problem of other minds, not as it relates to other human minds, but rather as it relates to artificial intelligences. He also discusses what might be called bilateral mind-reading: humans trying to read the minds of artificial intelligences and artificial intelligences trying to read the minds of humans. Lastly, Harris discusses whether super intelligent AI – if it could be created – should be afforded moral consideration, and also how we might convince super intelligent AI that (...)