  • Oldies but goldies? Comparing the trustworthiness and credibility of ‘new’ and ‘old’ information intermediaries. Lisa Weidmüller & Sven Engesser - forthcoming - Communications.
    People increasingly access news through ‘new’, algorithmic intermediaries such as search engines or aggregators rather than the ‘old’ (i.e., traditional), journalistic intermediaries. As algorithmic intermediaries do not adhere to journalistic standards, their trustworthiness comes into question. With this study, we (1) summarize the differences between journalistic and algorithmic intermediaries as found in previous literature; (2) conduct a cross-media comparison of information credibility and intermediary trustworthiness; and (3) examine how key predictors (such as modality, reputation, source attribution, and prior experience) (...)
  • Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Simone Grassini - 2023 - Frontiers in Psychology 14:1191628.
    The rapid advancement of artificial intelligence (AI) has generated an increasing demand for tools that can assess public attitudes toward AI. This study proposes the development and the validation of the AI Attitude Scale (AIAS), a concise self-report instrument designed to evaluate public perceptions of AI technology. The first version of the AIAS that the present manuscript proposes comprises five items, including one reverse-scored item, which aims to gauge individuals’ beliefs about AI’s influence on their lives, careers, and humanity overall. (...)
  • Trust and robotics: a multi-staged decision-making approach to robots in community. Wenxi Zhang, Willow Wong & Mark Findlay - 2024 - AI and Society 39 (5):2463-2478.
    With the desired outcome of social good within the wider robotics ecosystem, trust is identified as the central adhesive of the human–robot interaction (HRI) interface. However, building trust between humans and robots involves more than improving the machine’s technical reliability or trustworthiness in function. This paper presents a holistic, community-based approach to trust-building, where trust is understood as a multifaceted and multi-staged looped relation that depends heavily on context and human perceptions. Building on past literature that identifies dispositional and learned (...)
  • More Than a Feeling—Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety. Linda Miller, Johannes Kraus, Franziska Babel & Martin Baumann - 2021 - Frontiers in Psychology 12:592711.
    With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is an essential psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the interaction with (...)
  • Feedback and Direction Sources Influence Navigation Decision Making on Experienced Routes. Yu Li, Weijia Li, Yingying Yang & Qi Wang - 2019 - Frontiers in Psychology 10.
    When navigating in a new environment, it is typical for people to resort to external guidance such as GPS or other people. However, in the real world, even though navigators have learned the route, they may still prefer to travel with external guidance. We explored how the availability of feedback and the source of external guidance affect navigation decision-making on experienced routes in the presence of external guidance. In three experiments, participants navigated a simulated route three times and then verbally confirmed (...)
  • Psychological Effects of the Allocation Process in Human–Robot Interaction – A Model for Research on ad hoc Task Allocation. Alina Tausch, Annette Kluge & Lars Adolph - 2020 - Frontiers in Psychology 11.
  • Improving Teamwork Competencies in Human-Machine Teams: Perspectives From Team Science. Kimberly Stowers, Lisa L. Brady, Christopher MacLellan, Ryan Wohleber & Eduardo Salas - 2021 - Frontiers in Psychology 12.
    In response to calls for research to improve human-machine teaming, we present a “perspective” paper that explores techniques from computer science that can enhance machine agents for human-machine teams. As part of this paper, we summarize the state of the science on critical team competencies identified for effective HMT, discuss technological gaps preventing machines from fully realizing these competencies, and identify ways that emerging artificial intelligence capabilities may address these gaps and enhance performance in HMT. We extend beyond extant literature (...)
  • Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot. Darci Gallimore, Joseph B. Lyons, Thy Vo, Sean Mahoney & Kevin T. Wynne - 2019 - Frontiers in Psychology 10.
  • Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human–Technology Interactions Online. Atte Oksanen, Nina Savela, Rita Latikka & Aki Koivula - 2020 - Frontiers in Psychology 11.
    Robotization and artificial intelligence are expected to change societies profoundly. Trust is an important factor in human–technology interactions, as robots and AI increasingly contribute to tasks previously handled by humans. Currently, there is a need for studies investigating trust toward AI and robots, especially in first-encounter meetings. This article reports findings from a study investigating trust toward robots and AI in an online trust game experiment. The trust game manipulated the hypothetical opponents that were described as either AI or robots. (...)
  • Anthropomorphism in social robotics: empirical results on human–robot interaction in hybrid production workplaces. Anja Richert, Sarah Müller, Stefan Schröder & Sabina Jeschke - 2018 - AI and Society 33 (3):413-424.
    New forms of artificial intelligence on the one hand and the ubiquitous networking of “everything with everything” on the other characterize the fourth industrial revolution. This results in a changed understanding of human–machine interaction and in new models for production, in which humans and machines, together with virtual agents, form hybrid teams. The empirical study “Socializing with robots” aims to gain insight, in particular, into the conditions of development and the processes of hybrid human–machine teams. In the experiment, human–robot actions and interactions were (...)
  • Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments. Jinchao Lin, April Rose Panganiban, Gerald Matthews, Katey Gibbins, Emily Ankeney, Carlie See, Rachel Bailey & Michael Long - 2022 - Frontiers in Psychology 13.
    Effective human–robot teaming increasingly requires humans to work with intelligent, autonomous machines. However, novel features of intelligent autonomous systems such as social agency and incomprehensibility may influence the human’s trust in the machine. The human operator’s mental model for machine functioning is critical for trust. People may consider an intelligent machine partner as either an advanced tool or as a human-like teammate. This article reports a study that explored the role of individual differences in the mental model in a simulated (...)
  • Trusting autonomous vehicles as moral agents improves related policy support. Kristin F. Hurst & Nicole D. Sintov - 2022 - Frontiers in Psychology 13.
    Compared to human-operated vehicles, autonomous vehicles (AVs) offer numerous potential benefits. However, public acceptance of AVs remains low. Using 4 studies, including 1 preregistered experiment, the present research examines the role of trust in AV adoption decisions. Using the Trust-Confidence-Cooperation model as a conceptual framework, we evaluate whether perceived integrity of technology—a previously underexplored dimension of trust that refers to perceptions of the moral agency of a given technology—influences AV policy support and adoption intent. We find that perceived technology integrity predicts (...)
  • Driving Into the Future. P. A. Hancock - 2020 - Frontiers in Psychology 11.
    This work considers the future of driving in terms of both its short- and long-term horizons. It conjectures that human-controlled driving will follow in the footsteps of a wide swath of other, now either residual or abandoned, human occupations: pursuits that have preceded it into oblivion. In this way, driving will dwindle down into only a few niche locales wherein enthusiasts will still persist, much in the way that steam train hobbyists now continue their own aspirational inclinations. Of course, the (...)
  • Reducing Cognitive Load and Improving Warfighter Problem Solving With Intelligent Virtual Assistants. Celso M. de Melo, Kangsoo Kim, Nahal Norouzi, Gerd Bruder & Gregory Welch - 2020 - Frontiers in Psychology 11:554706.
    Recent times have seen increasing interest in conversational assistants (e.g., Amazon Alexa) designed to help users in their daily tasks. In military settings, it is critical to design assistants that are, simultaneously, helpful and able to minimize the user’s cognitive load. Here we show that embodiment plays a key role in achieving that goal. We present an experiment where participants engaged in the desert survival task in augmented reality. Participants were paired with a voice assistant, an embodied assistant, or no (...)