Algorithm exploitation: humans are keen to exploit benevolent AI

iScience 24 (6):102679 (2021)
Abstract
We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return the AI's benevolence to the same degree, and exploited the AI more than they exploited other humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans returning their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.