A pluralist hybrid model for moral AIs

AI and Society:1-10 (forthcoming)

Abstract

As A.I.s and machines are applied across an increasing range of social contexts, the need to implement ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. We then propose the pluralist hybrid approach and show how it can partly alleviate these limitations. The core ethical decision-making capacity of an A.I. based on the pluralist hybrid approach consists of two systems. The first is a deterministic algorithmic system that incorporates different moral rules for making explicit moral decisions. The second is a machine learning system that calculates the values of the variables required for the application of the moral principles. The pluralist hybrid system improves on existing proposals in two ways: (i) by incorporating different moral principles, it better addresses the moral disagreement problem facing the top-down approach; and (ii) by implementing explicit moral principles for moral decision-making, it reduces the opacity of ethical decision-making compared with a bottom-up system.
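The two-system architecture described above can be sketched in code. The following is a minimal, purely illustrative sketch: the function names, the toy principles, and the unanimity aggregation rule are all assumptions for the sake of illustration, not the authors' implementation.

```python
# Hypothetical sketch of the pluralist hybrid architecture from the abstract.
# All names, rules, and numbers are illustrative assumptions.

def ml_estimate(features):
    """Stand-in for the machine learning system: maps raw features of a
    situation to the variables the moral principles need (e.g., expected
    harm and benefit). Here it is just a fixed toy function."""
    return {
        "expected_harm": features["harm_signal"] * 0.9,
        "expected_benefit": features["benefit_signal"] * 0.8,
        "violates_consent": features["consent_flag"],
    }

# Deterministic rule system: each principle inspects the estimated
# variables and votes to permit or forbid the action.
def utilitarian_rule(v):
    return v["expected_benefit"] > v["expected_harm"]

def deontological_rule(v):
    return not v["violates_consent"]

PRINCIPLES = [utilitarian_rule, deontological_rule]

def decide(features):
    """Permit an action only if every implemented principle permits it --
    one simple (assumed) way to aggregate a plurality of moral rules."""
    variables = ml_estimate(features)
    return all(rule(variables) for rule in PRINCIPLES)

print(decide({"harm_signal": 0.2, "benefit_signal": 0.9, "consent_flag": False}))  # True
print(decide({"harm_signal": 0.9, "benefit_signal": 0.2, "consent_flag": False}))  # False
```

Because the moral principles are explicit, stand-alone functions, the reason an action was forbidden can be read directly off the rule that rejected it, which is the transparency advantage the abstract claims over a purely bottom-up system.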

Author Profiles

Fei Song
Lingnan University
Felix S. H. Yeung
University of Essex

Analytics

Added to PP
2022-12-05
