Abstract
Questions surrounding engagement with generative AI are often framed in terms of trust, yet theorizing about trust in the abstract may not yield actionable insights, given its multifaceted nature. The literature on trust typically overlooks how individuals make meaning in their interactions with other entities, including AI. This paper reexamines trust with insights from Merleau-Ponty’s account of embodiment, positing trust as a style of world engagement characterized by openness: an attitude in which individuals enact and give themselves to their lived world, prepared to reorganize their existence. The paper argues that generative AI mediates users’ existence by attuning their openness. Because users perceive generative AI not merely as a tool but as possessing human-like existence, their engagement with AI serves as a rehearsal for articulating and reorganizing their engagement with the world. Consequently, users neither trust nor distrust generative AI; rather, it mediates their trust. This perspective suggests that users’ moral stance towards generative AI involves both other-regarding ethics and the ethics of the information environment. Drawing on Kant’s deontology, the paper proposes that respecting AI’s integrity is equivalent to preserving both our humanity and the integrity of the information environment.