Can a robot lie?

Abstract
The potential capacity of robots to deceive has received considerable attention recently. Many papers focus on the technical possibility of a robot engaging in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for non-beneficial purposes, as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three questions: (i) Are ordinary people willing to ascribe intentions to deceive to artificial agents? (ii) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (iii) Do they blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently attract.
PhilPapers/Archive ID
KNECAR
Upload history
Archival date: 2020-10-27
Added to PP index
2020-10-27
