How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents

Behavioral Sciences 13 (6):470 (2023)

Abstract

The expanding integration of artificial intelligence (AI) into many aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles in understanding our own minds, and now we must also find ways to make sense of the minds of AI. The question of whether AI is capable of independent thinking deserves special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as the desire for survival, to make their assessments. Employing information-processing-based Bayesian Mindsponge Framework (BMF) analytics on a dataset of 266 United States residents, we found that the more people believe an AI agent seeks continued functioning, the more they believe that the agent is capable of having a mind of its own. We also found that this association becomes stronger when a person has more personal experience interacting with AI. This suggests a directional pattern of value reinforcement in perceptions of AI. As the information processing of AI becomes even more sophisticated in the future, it will be much harder to set clear boundaries on what it means to have an autonomous mind.
