AI-Related Misdirection Awareness in AIVR

Abstract

Recent AI progress has led to a boost in beneficial applications across multiple research areas, including VR. Simultaneously, in this newly unfolding deepfake era, ethical and security-relevant disagreements have arisen in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that a responsible approach requires that, prior to engaging in a rigorous epistemic assessment of AI, humans profit from a self-questioning strategy: an examination and calibration of the experience of their own epistemic agency, especially to counteract both intentional misdirection by unethical actors and unintentional epistemic self-sabotage. In this paper, we expound on a new avenue of utilizing AIVR tools to advance an AI-related misdirection awareness in humans in the deepfake era. Firstly, we harness scientific knowledge from the psychology and neuroscience of magic, where the study of misdirection techniques is center stage. Secondly, we connect the latter to creativity research linking human creative potential to inspiration from the seemingly impossible. Overall, AIVR could become an empowering experiential testbed for human epistemic agency, enabling a more rational evaluation of AI capabilities. However, misuse of the same type of tools could yield AIVR safety risks if not counteracted preemptively.
