  • Neuroenhancement, the Criminal Justice System, and the Problem of Alienation. Jukka Varelius - 2020 - Neuroethics 13 (3):325-335.
    It has been suggested that neuroenhancements could be used to improve the abilities of criminal justice authorities. Judges could be made more able to make adequately informed and unbiased decisions, for example. Yet, while such a prospect appears appealing, the views of neuroenhanced criminal justice authorities could also be alien to the unenhanced public. This could compromise the legitimacy and functioning of the criminal justice system. In this article, I assess possible solutions to this problem. I maintain that none of (...)
  • Socially Responsive Technologies: Toward a Co-Developmental Path. Daniel W. Tigard, Niël H. Conradie & Saskia K. Nagel - forthcoming - AI and Society.
    Robotic and artificially intelligent systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction, we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or praise at technology itself (...)
  • The Artificial View: Toward a Non-Anthropocentric Account of Moral Patiency. Fabio Tollon - forthcoming - Ethics and Information Technology.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  • The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm. Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge to human responsibility, as long as humans can control them and update them. He argues that the agency exercised in (...)
  • Challenges for an Ontology of Artificial Intelligence. Scott H. Hawley - 2019 - Perspectives on Science and Christian Faith 71 (2):83-95.
    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be (...)
  • The Rise of Artificial Intelligence and the Crisis of Moral Passivity. Berman Chan - forthcoming - AI and Society:1-3.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue firstly that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all our (...)
  • The Ethics of Crashes with Self‐Driving Cars: A Roadmap, I. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12507.