  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Trait attribution explains human–robot interactions.Yochanan E. Bigman, Nicholas Surdel & Melissa J. Ferguson - 2023 - Behavioral and Brain Sciences 46:e23.
    Clark and Fischer (C&F) claim that trait attribution has major limitations in explaining human–robot interactions. We argue that the trait attribution approach can explain the three issues posited by C&F. We also argue that the trait attribution approach is parsimonious, as it assumes that the same mechanisms of social cognition apply to human–robot interaction.
  • Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures.Maude Lavanchy, Patrick Reichert, Jayanth Narayanan & Krishna Savani - forthcoming - Journal of Business Ethics.
    Despite the rapid adoption of technology in human resource departments, there is little empirical work that examines the potential challenges of algorithmic decision-making in the recruitment process. In this paper, we take the perspective of job applicants and examine how they perceive the use of algorithms in selection and recruitment. Across four studies on Amazon Mechanical Turk, we show that people in the role of a job applicant perceive algorithm-driven recruitment processes as less fair compared to human only or algorithm-assisted (...)
  • (1 other version)The Role of Decision Authority and Stated Social Intent as Predictors of Trust in Autonomous Robots.Joseph B. Lyons, Sarah A. Jessup & Thy Q. Vo - 2024 - Topics in Cognitive Science 16 (3):430-449.
    Prior research has demonstrated that trust in robots and performance of robots are two important factors that influence human–autonomy teaming. However, other factors may influence users’ perceptions and use of autonomous systems, such as perceived intent of robots and decision authority of the robots. The current study experimentally examined participants’ trust in an autonomous security robot (ASR), perceived trustworthiness of the ASR, and desire to use an ASR that varied in levels of decision authority and benevolence. Participants (N = 340) (...)
  • Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI.Marilyn Giroux, Jungkeun Kim, Jacob C. Lee & Jongwon Park - 2022 - Journal of Business Ethics 178 (4):1027-1041.
    Several technological developments, such as self-service technologies and artificial intelligence, are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns (...)
  • (1 other version)Can Artificial Intelligence Make Art?Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interactions.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less (...)
  • You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human–Robot Interaction.Abdulaziz Abubshait & Eva Wiese - 2017 - Frontiers in Psychology 8:277299.
    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have towards others, and determines the degree of empathy, prosociality and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to nonhuman agents like robots, as long (...)
  • Intuitive And Reflective Responses In Philosophy.Nick Byrd - 2014 - Dissertation, University of Colorado
    Cognitive scientists have revealed systematic errors in human reasoning. There is disagreement about what these errors indicate about human rationality, but one upshot seems clear: human reasoning does not seem to fit traditional views of human rationality. This concern about rationality has made its way through various fields and has recently caught the attention of philosophers. The concern is that if philosophers are prone to systematic errors in reasoning, then the integrity of philosophy would be threatened. In this paper, I (...)
  • Moral Judgments in the Age of Artificial Intelligence.Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • A growth mindset about human minds promotes positive responses to intelligent technology.Jianning Dang & Li Liu - 2022 - Cognition 220 (C):104985.
  • Psychological consequences of legal responsibility misattribution associated with automated vehicles.Peng Liu, Manqing Du & Tingting Li - 2021 - Ethics and Information Technology 23 (4):763-776.
    A human driver and an automated driving system might share control of automated vehicles in the near future. This raises many concerns associated with the assignment of responsibility for negative outcomes caused by them; one is that the human driver might be required to bear the brunt of moral and legal responsibilities. The psychological consequences of responsibility misattribution have not yet been examined. We designed a hypothetical crash similar to Uber’s 2018 fatal crash. We incorporated five legal responsibility attributions. Participants (...)
  • Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.Sam Thellman, Annika Silvervarg & Tom Ziemke - 2017 - Frontiers in Psychology 8.
  • People are averse to machines making moral decisions.Yochanan E. Bigman & Kurt Gray - 2018 - Cognition 181 (C):21-34.
  • COVID-19, Coronavirus, Wuhan Virus, or China Virus? Understanding How to “Do No Harm” When Naming an Infectious Disease.Theodore C. Masters-Waage, Nilotpal Jha & Jochen Reb - 2020 - Frontiers in Psychology 11.
    When labeling an infectious disease, officially sanctioned scientific names, e.g., “H1N1 virus,” are recommended over place-specific names, e.g., “Spanish flu.” This is due to concerns from policymakers and the WHO that the latter might lead to unintended stigmatization. However, with little empirical support for such negative consequences, authorities might be focusing limited resources on an overstated issue. This paper empirically investigates the impact of naming against the current backdrop of the 2019–2020 pandemic. The first hypothesis posited that using place-specific (...)
  • Trusting autonomous vehicles as moral agents improves related policy support.Kristin F. Hurst & Nicole D. Sintov - 2022 - Frontiers in Psychology 13.
    Compared to human-operated vehicles, autonomous vehicles offer numerous potential benefits. However, public acceptance of AVs remains low. Using 4 studies, including 1 preregistered experiment, the present research examines the role of trust in AV adoption decisions. Using the Trust-Confidence-Cooperation model as a conceptual framework, we evaluate whether perceived integrity of technology—a previously underexplored dimension of trust that refers to perceptions of the moral agency of a given technology—influences AV policy support and adoption intent. We find that perceived technology integrity predicts (...)
  • A study on psychological determinants of users' autonomous vehicles adoption from anthropomorphism and UTAUT perspectives.Yuqi Tian & Xiaowen Wang - 2022 - Frontiers in Psychology 13.
    As autonomous vehicle technology gradually enters the public eye, understanding consumers' psychological motivations for accepting autonomous vehicles is critical for the development of autonomous vehicles and society. Previously, researchers have explored the determinants of fully autonomous vehicles, but the relevant research is far from enough. Moreover, the relationship between anthropomorphism and users' behavior has been ignored to a large extent. Therefore, this study aims to fill the gap by using anthropomorphism and the unified theory of acceptance and use of (...)
  • Multi-device trust transfer: Can trust be transferred among multiple devices?Kohei Okuoka, Kouichi Enami, Mitsuhiko Kimoto & Michita Imai - 2022 - Frontiers in Psychology 13.
    Recent advances in automation technology have increased the opportunity for collaboration between humans and multiple autonomous systems such as robots and self-driving cars. In research on autonomous system collaboration, the trust users have in autonomous systems is an important topic. Previous research suggests that the trust built by observing a task can be transferred to other tasks. However, such research did not focus on trust in multiple different devices but in one device or several of the same devices. Thus, we (...)
  • Hiding Behind Machines: Artificial Agents May Help to Evade Punishment.Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating (...)
  • In the name of science: animal appellations and best practice.Jessica du Toit - 2020 - Journal of Medical Ethics 46 (12):840-843.
    BackgroundThe practice of giving animal research subjects proper names is frowned on by the academic scientific community. While researchers provide a number of reasons for desisting from giving their animal subjects proper names, the most common are that naming leads to anthropomorphising which, in turn, leads to data and results that are unobjective and invalid; and while naming does not necessarily entail some mistake on the researcher’s part, some feature of the research enterprise renders the practice impossible or ill-advised.ObjectivesMy aim (...)
  • Do People Regard Robots as Human-Like Social Partners? Evidence From Perspective-Taking in Spatial Descriptions.Chengli Xiao, Liufei Xu, Yuqing Sui & Renlai Zhou - 2021 - Frontiers in Psychology 11.
    Spatial communications are essential to the survival and social interaction of human beings. In science fiction and the near future, robots are supposed to be able to understand spatial languages to collaborate and cooperate with humans. However, it remains unknown whether human speakers regard robots as human-like social partners. In this study, human speakers describe target locations to an imaginary human or robot addressee under various scenarios varying in relative speaker–addressee cognitive burden. Speakers made equivalent perspective choices to human and (...)
  • Trust, risk perception, and intention to use autonomous vehicles: an interdisciplinary bibliometric review.Mohammad Naiseh, Jediah Clark, Tugra Akarsu, Yaniv Hanoch, Mario Brito, Mike Wald, Thomas Webster & Paurav Shukla - forthcoming - AI and Society:1-21.
    Autonomous vehicles (AV) offer promising benefits to society in terms of safety, environmental impact and increased mobility. However, acute challenges persist with any novel technology, including the perceived risks and trust underlying public acceptance. While research examining the current state of AV public perceptions and future challenges related to both societal and individual barriers to trust and risk perceptions is emerging, it is highly fragmented across disciplines. To address this research gap, by using the Web of Science database, our study (...)
  • Corporate insecthood.Nina Strohminger & Matthew R. Jordan - 2022 - Cognition 224 (C):105068.