Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 160:1–17 (2022)

Abstract

While artificial intelligence (AI) is increasingly applied in decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived, and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and the level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalents. This is reflected in participants' reliance on AI: recommendations and decisions made by AI are accepted more often than those made by human experts. However, AI team experts are perceived as less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

Author's Profile

Markus Kneer
University of Graz
