Citations
  • Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • AI assisted ethics. Amitai Etzioni & Oren Etzioni - 2016 - Ethics and Information Technology 18 (2):149-156.
    The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided to them by programmers. Hence, the question the makers and users of smart instruments face is how to ensure that these instruments will not engage in unethical conduct. The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Robot rights? Towards a social-relational justification of moral consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
  • The coming technological singularity: How to survive in the post-human era. Vernor Vinge - 1993 - Whole Earth Review.
  • Artificial intelligence, transparency, and public decision-making. Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • In AI we trust? Perceptions about automated decision-making by artificial intelligence. Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial (...)
  • Materializing Morality: Design Ethics and Technological Mediation. Peter-Paul Verbeek - 2006 - Science, Technology, and Human Values 31 (3):361-380.
    During the past decade, the “script” concept, indicating how technologies prescribe human actions, has acquired a central place in STS. Until now, the concept has mainly functioned in descriptive settings. This article will deploy it in a normative setting. When technologies coshape human actions, they give material answers to the ethical question of how to act. This implies that engineers are doing “ethics by other means”: they materialize morality. The article will explore the implications of this insight for engineering ethics. (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
  • A New History of Ourselves, in the Shadow of our Obsessions and Compulsions. Pierre-Henri Castel, Angela Verdier & Louis Sass - 2014 - Philosophy, Psychiatry, and Psychology 21 (4):299-309.
    Before broaching our main subject, and exploring why, among all disorders of the mind, obsessive-compulsive disorders have a place apart, I would like to start from a dilemma that is well-known to historians interested in mental disorders. According to one approach, a mental illness X is considered as a bona fide or ‘genuine’ illness if, and only if, it originates from a disturbance of the brain. Its neurobiological form is in this case considered as invariant, whatever cultural veneer might give (...)
  • The “big red button” is too late: an alternative model for the ethical evaluation of AI systems. Thomas Arnold & Matthias Scheutz - 2018 - Ethics and Information Technology 20 (1):59-69.
    As a way to address both ominous and ordinary threats of artificial intelligence, researchers have started proposing ways to stop an AI system before it has a chance to escape outside control and cause harm. A so-called “big red button” would enable human operators to interrupt or divert a system while preventing the system from learning that such an intervention is a threat. Though an emergency button for AI seems to make intuitive sense, that approach ultimately concentrates on the point (...)
  • Ethics of responsibilities distributions in a technological culture. Hans Lenk - 2017 - AI and Society 32 (2):219-231.
  • Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers. Paul Conway, Jacob Goldstein-Greenwood, David Polacek & Joshua D. Greene - 2018 - Cognition 179 (C):241-265.
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the intensification of existing ethical issues associated with AI and the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • On the ethics of AI ethics. Udo Schuklenk - 2020 - Bioethics 34 (2):146-147.