  • Towards robots that trust: Human subject validation of the situational conditions for trust.Alan R. Wagner & Paul Robinette - 2015 - Interaction Studies 16 (1):89-117.
    This article investigates the challenge of developing a robot capable of determining if a social situation demands trust. Solving this challenge may allow a robot to react when a person over- or under-trusts the system. Prior work in this area has focused on understanding the factors that influence a person’s trust of a robot. In contrast, by using game-theoretic representations to frame the problem, we are able to develop a set of conditions for determining if an interactive situation demands (...)
  • Interactive Team Cognition.Nancy J. Cooke, Jamie C. Gorman, Christopher W. Myers & Jasmine L. Duran - 2013 - Cognitive Science 37 (2):255-285.
    Cognition in work teams has been predominantly understood and explained in terms of shared cognition with a focus on the similarity of static knowledge structures across individual team members. Inspired by the current zeitgeist in cognitive science, as well as by empirical data and pragmatic concerns, we offer an alternative theory of team cognition. Interactive Team Cognition (ITC) theory posits that (1) team cognition is an activity, not a property or a product; (2) team cognition should be measured and studied (...)
  • Attitudes toward artificial intelligence: combining three theoretical perspectives on technology acceptance.Pascal D. Koenig - forthcoming - AI and Society:1-13.
    Evidence on AI acceptance comes from a diverse field comprising public opinion research and largely experimental studies from various disciplines. Differing theoretical approaches in this research, however, imply heterogeneous ways of studying AI acceptance. The present paper provides a framework for systematizing different uses. It identifies three families of theoretical perspectives informing research on AI acceptance—user acceptance, delegation acceptance, and societal adoption acceptance. These models differ in scope, each has elements specific to them, and the connotation of technology acceptance thus (...)
  • Oldies but goldies? Comparing the trustworthiness and credibility of ‘new’ and ‘old’ information intermediaries.Lisa Weidmüller & Sven Engesser - forthcoming - Communications.
    People increasingly access news through ‘new’, algorithmic intermediaries such as search engines or aggregators rather than the ‘old’ (i.e., traditional), journalistic intermediaries. As algorithmic intermediaries do not adhere to journalistic standards, their trustworthiness comes into question. With this study, we (1) summarize the differences between journalistic and algorithmic intermediaries as found in previous literature; (2) conduct a cross-media comparison of information credibility and intermediary trustworthiness; and (3) examine how key predictors (such as modality, reputation, source attribution, and prior experience) (...)
  • The effects of explanations on automation bias.Mor Vered, Tali Livni, Piers Douglas Lionel Howe, Tim Miller & Liz Sonenberg - 2023 - Artificial Intelligence 322 (C):103952.
  • The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making.Gabi Schaap, Tibor Bosse & Paul Hendriks Vettehen - forthcoming - AI and Society:1-14.
    While algorithmic decision-making (ADM) is projected to increase exponentially in the coming decades, the academic debate on whether people are ready to accept, trust, and use ADM as opposed to human decision-making is ongoing. The current research aims at reconciling conflicting findings on ‘algorithmic aversion’ in the literature. It does so by investigating algorithmic aversion while controlling for two important characteristics that are often associated with ADM: increased benefits (monetary and accuracy) and decreased user control. Across three high-powered (Ntotal = (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI.Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Crossing the street in front of an autonomous vehicle: An investigation of eye contact between drivengers and vulnerable road users.Aïsha Sahaï, Elodie Labeye, Loïc Caroux & Céline Lemercier - 2022 - Frontiers in Psychology 13:981666.
    Communication between road users is a major key to coordinate movement and increase roadway safety. The aim of this work was to grasp how pedestrians (Experiment A), cyclists (Experiment B), and kick scooter users (Experiment C) sought to visually communicate with drivengers when they would face autonomous vehicles (AVs). In each experiment, participants (n = 462, n = 279, and n = 202, respectively) were asked to imagine themselves in described situations of encounters between a specific type of vulnerable road user (e.g., pedestrian) and a (...)
  • Technology and moral change: the transformation of truth and trust.Henrik Skaug Sætra & John Danaher - 2022 - Ethics and Information Technology 24 (3):1-16.
    Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by (...)
  • Trust and ethics in AI.Hyesun Choung, Prabu David & Arun Ross - 2023 - AI and Society 38 (2):733-745.
    With the growing influence of artificial intelligence (AI) in our lives, the ethical implications of AI have received attention from various communities. Building on previous work on trust in people and technology, we advance a multidimensional, multilevel conceptualization of trust in AI and examine the relationship between trust and ethics using the data from a survey of a national sample in the U.S. This paper offers two key dimensions of trust in AI—human-like trust and functionality trust—and presents a multilevel conceptualization (...)
  • “I Think You Are Trustworthy, Need I Say More?” The Factor Structure and Practicalities of Trustworthiness Assessment.Michael A. Lee, Gene M. Alarcon & August Capiola - 2022 - Frontiers in Psychology 13.
    Two popular models of trustworthiness have garnered support over the years. One has postulated three aspects of trustworthiness as state-based antecedents to trust. Another has been interpreted to comprise two aspects of trustworthiness. Empirical data shows support for both models, and debate remains as to the theoretical and practical reasons researchers may adopt one model over the other. The present research aimed to consider this debate by investigating the factor structure of trustworthiness. Taking items from two scales commonly employed to (...)
  • AI and Ethics When Human Beings Collaborate With AI Agents.José J. Cañas - 2022 - Frontiers in Psychology 13.
    The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is (...)
  • A sociotechnical perspective for the future of AI: narratives, inequalities, and human control.Andreas Theodorou & Laura Sartori - 2022 - Ethics and Information Technology 24 (1):1-11.
    Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature (...)
  • How can we know a self-driving car is safe?Jack Stilgoe - 2021 - Ethics and Information Technology 23 (4):635-647.
    Self-driving cars promise solutions to some of the hazards of human driving but there are important questions about the safety of these new technologies. This paper takes a qualitative social science approach to the question ‘how safe is safe enough?’ Drawing on 50 interviews with people developing and researching self-driving cars, I describe two dominant narratives of safety. The first, safety-in-numbers, sees safety as a self-evident property of the technology and offers metrics in an attempt to reassure the public. The (...)
  • Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human–Technology Interactions Online.Atte Oksanen, Nina Savela, Rita Latikka & Aki Koivula - 2020 - Frontiers in Psychology 11.
    Robotization and artificial intelligence are expected to change societies profoundly. Trust is an important factor of human–technology interactions, as robots and AI increasingly contribute to tasks previously handled by humans. Currently, there is a need for studies investigating trust toward AI and robots, especially in first-encounter meetings. This article reports findings from a study investigating trust toward robots and AI in an online trust game experiment. The trust game manipulated the hypothetical opponents that were described as either AI or robots. (...)
  • An options-based solution to the sequential auction problem.Adam I. Juda & David C. Parkes - 2009 - Artificial Intelligence 173 (7-8):876-899.
  • Ethical challenges in argumentation and dialogue in a healthcare context.Mark Snaith, Rasmus Øjvind Nielsen, Sita Ramchandra Kotnis & Alison Pease - forthcoming - Argument and Computation:1-16.
    As the average age of the population increases, so too do the number of people living with chronic illnesses. With limited resources available, the development of dialogue-based e-health systems that provide justified general health advice offers a cost-effective solution to the management of chronic conditions. It is however imperative that such systems are responsible in their approach. We present in this paper two main challenges for the deployment of e-health systems, that have a particular relevance to dialogue and argumentation: collecting (...)
  • Looking for Age Differences in Self-Driving Vehicles: Examining the Effects of Automation Reliability, Driving Risk, and Physical Impairment on Trust.Ericka Rovira, Anne Collins McLaughlin, Richard Pak & Luke High - 2019 - Frontiers in Psychology 10.
  • Expertise, Automation and Trust in X-Ray Screening of Cabin Baggage.Alain Chavaillaz, Adrian Schwaninger, Stefan Michel & Juergen Sauer - 2019 - Frontiers in Psychology 10.
  • Communicating Intent of Automated Vehicles to Pedestrians.Azra Habibovic, Victor Malmsten Lundgren, Jonas Andersson, Maria Klingegård, Tobias Lagström, Anna Sirkka, Johan Fagerlönn, Claes Edgren, Rikard Fredriksson, Stas Krupenia, Dennis Saluäär & Pontus Larsson - 2018 - Frontiers in Psychology 9:284756.
    While traffic signals, signs, and road markings provide explicit guidelines for those operating in and around the roadways, some decisions, such as determinations of “who will go first,” are made by implicit negotiations between road users. In such situations, pedestrians are today often dependent on cues in drivers’ behavior such as eye contact, postures, and gestures. With the introduction of more automated functions and the transfer of control from the driver to the vehicle, pedestrians cannot rely on such non-verbal cues (...)
  • Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development.Shanee Honig & Tal Oron-Gilad - 2018 - Frontiers in Psychology 9:351644.
    While substantial effort has been invested in making robots more reliable, experience demonstrates that robots operating in unstructured environments are often challenged by frequent failures. Despite this, robots have not yet reached a level of design that allows effective management of faulty or unexpected behavior by untrained users. To understand why this may be the case, an in-depth literature review was done to explore how people perceive and resolve robot failures, how robots communicate failure, how failures influence people's perceptions and (...)
  • Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, (...)
  • Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough.John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
  • Psychological Effects of the Allocation Process in Human–Robot Interaction – A Model for Research on ad hoc Task Allocation.Alina Tausch, Annette Kluge & Lars Adolph - 2020 - Frontiers in Psychology 11.
  • Human performance consequences of normative and contrastive explanations: An experiment in machine learning for reliability maintenance.Davide Gentile, Birsen Donmez & Greg A. Jamieson - 2023 - Artificial Intelligence 321 (C):103945.
  • More Than a Feeling—Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety.Linda Miller, Johannes Kraus, Franziska Babel & Martin Baumann - 2021 - Frontiers in Psychology 12:592711.
    With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is an essential psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the interaction with (...)
  • Techno-Wantons: Adaptive Technology and the Will of Tomorrow.Ben White - forthcoming - Topoi:1-13.
    Recent work within the tradition of 4E cognitive science and philosophy of mind has drawn attention to the ways that our technological, material, and social environments can act as hostile, oppressive, and harmful scaffolding. These accounts push back against a perceived optimistic bias in the wider literature, whereby, according to the critics, our engagements with technology are painted as taking place on our terms, to our benefit, in ways uncomplicated by political realities. This article enters into that conversation, and aims (...)