References

  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a (...)
  • AI and society: a virtue ethics approach. Mirko Farina, Petr Zhdanov, Artur Karimov & Andrea Lavazza - 2024 - AI and Society 39 (3):1127-1140.
    Advances in artificial intelligence and robotics stand to change many aspects of our lives, including our values. If trends continue as expected, many industries will undergo automation in the near future, calling into question whether we can still value the sense of identity and security our occupations once provided us with. Likewise, the advent of social robots driven by AI, appears to be shifting the meaning of numerous, long-standing values associated with interpersonal relationships, like friendship. Furthermore, powerful actors’ and institutions’ (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: the Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • (1 other version) Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • (1 other version) Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments. Ahmed Izzidien - 2021 - AI and Society (March 2021):1-20.
    Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches, has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover if a verb is unfair or fair. It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises. That individuals consider fair acts those that they would be willing (...)
  • African Reasons Why Artificial Intelligence Should Not Maximize Utility. Thaddeus Metz - 2021 - In Beatrice Dedaa Okyere-Manu (ed.), African Values, Ethics, and Technology: Questions, Issues, and Approaches. Palgrave-Macmillan. pp. 55-72.
    Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a characteristically western conception of moral reason, machines should be programmed to do whatever they could in a given circumstance to produce in the long run the highest net balance of what is good for human beings minus what is bad for (...)
  • (1 other version) Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • (1 other version) Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments. Ahmed Izzidien - 2022 - AI and Society 37 (1):299-318.
    Programming artificial intelligence to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches, has presented the challenge of defining cross-cultural fairness. In this paper a simple method is presented which uses vectors to discover if a verb is unfair or fair. It uses already existing relational social ontologies inherent in Word Embeddings and thus requires no training. The plausibility of the approach rests on two premises. That individuals consider fair acts those that they would be willing (...)
  • A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Expanding Nallur's Landscape of Machine Implemented Ethics. William A. Bauer - 2020 - Science and Engineering Ethics 26 (5):2401-2410.
    What ethical principles should autonomous machines follow? How do we implement these principles, and how do we evaluate these implementations? These are some of the critical questions Vivek Nallur asks in his essay “Landscape of Machine Implemented Ethics” (2020). He provides a broad, insightful survey of answers to these questions, especially focused on the implementation question. In this commentary, I will first critically summarize the main themes and conclusions of Nallur’s essay and then expand upon the landscape that Nallur presents (...)
  • Moral control and ownership in AI systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more attention from the (...)
  • Categorization and challenges of utilitarianisms in the context of artificial intelligence. Štěpán Cvik - 2022 - AI and Society 37 (1):291-297.
    The debates about ethics in the context of artificial intelligence have been recently focusing primarily on various types of utilitarianisms. This article suggests a categorization of the various presented utilitarianisms into static utilitarianisms and dynamic utilitarianisms. It explains the main features of both. Then, it presents the challenges the utilitarianisms in each group need to be able to deal with. Since it appears that those cannot be overcome in the context of each group alone, the article suggests a possibility of (...)