The emergence of “truth machines”?: Artificial intelligence approaches to lie detection

Abstract
This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, then analyze how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias, and I outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles of human test administrators and human subjects, adding machine learning-based AI agents to the situation, establishing invasive data collection processes, and introducing certain biases in results. I project that the potential for pervasive and continuous lie detection initiatives is substantial, displacing human-centered efforts to establish trust and foster integrity in organizations. I argue that if it is possible for HR managers to do so, they should cease using technologically-based lie detection systems entirely and work to foster trust and accountability on a human scale. However, if these AI-enhanced technologies are put into place by organizations through law, agency mandate, or other compulsory measures, care should be taken that the impacts of the technologies on human rights and wellbeing are considered. The article explores how AI can displace the human agent in some aspects of lie detection and credibility assessment scenarios, expanding the prospects for inscrutable, “black box” processes and novel physiological constructs that may heighten concerns about fairness, mental privacy, and bias. Employee interactions with autonomous lie detection systems rather than with human beings who administer specific tests can reframe organizational processes and rules concerning the assessment of personal honesty and integrity.
The dystopian projection of organizational life in which analyses and judgments of the honesty of one’s utterances are made automatically and in conjunction with one’s personal profile provides unsettling prospects for the autonomy of self-representation.
PhilPapers/Archive ID
ORATEO-2
Upload history
Archival date: 2022-07-30
Added to PP index
2022-02-01
