AI Can Help Us Live More Deliberately

MIT Sloan Management Review 60 (4) (2019)

Abstract

Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation by false news, deceptive advertising, and political rhetoric. In this article, I consider the overarching features of this problem and provide a framework to help AI designers tackle it through system enhancements in smartphones and other products and services in the burgeoning internet of things (IoT) marketplace. The framework is informed by two ideas: psychologist Daniel Kahneman’s cognitive dual process theory and moral self-awareness theory, a four-level model of moral identity that I developed with Benjamin M. Cole.

Author's Profile

Julian Friedland
Metropolitan State University of Denver
