A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

Kritike 12 (1):182–200 (2018)
Abstract
This paper focuses on the research field of machine ethics and how it relates to a technological singularity, a hypothesized future event in which artificial machines attain greater-than-human intelligence. One problem related to the singularity is whether human values and norms would survive such an event. To help ensure that they do, a number of artificial intelligence researchers have opted to focus on developing artificial moral agents: machines capable of moral reasoning, judgment, and decision-making. To date, several frameworks for arriving at such agents have been put forward, but there is no firm consensus on which framework is likely to yield a positive result. Given the body of work they have contributed to the study of moral agency, philosophers are well placed to contribute to the growing literature on artificial moral agency. In doing so, they could also consider how the concept of artificial moral agency affects other important philosophical concepts.
PhilPapers/Archive ID: BOYACF-2
Archival date: 2018-08-24
Added to PP index: 2018-07-02
