  • People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Simon Myers & Jim A. C. Everett - 2025 - Cognition 256 (C):106028.
  • Algorithms and dehumanization: a definition and avoidance model. Mario D. Schultz, Melanie Clegg, Reto Hofstetter & Peter Seele - forthcoming - AI and Society:1-21.
    Dehumanization by algorithms raises important issues for business and society. Yet, these issues remain poorly understood due to the fragmented nature of the evolving dehumanization literature across disciplines, originating from colonialism, industrialization, post-colonialism studies, contemporary ethics, and technology studies. This article systematically reviews the literature on algorithms and dehumanization (n = 180 articles) and maps existing knowledge across several clusters that reveal its underlying characteristics. Based on the review, we find that algorithmic dehumanization is particularly problematic for human resource management (...)
  • Do we want AI judges? The acceptance of AI judges’ judicial decision-making on moral foundations. Taenyun Kim & Wei Peng - forthcoming - AI and Society:1-14.
    This study explored the acceptance of artificial intelligence-based judicial decision-making (AI-JDM) as compared to human judges, focusing on the moral foundations of the cases involved, using within-subject experiments. The study found a general aversion toward AI-JDM regarding perceived risk, permissibility, and social approval. However, when cases are rooted in the moral foundation of fairness, AI-JDM receives slightly higher social approval, though the effect size remains small. The study also found that demographic factors like racial/ethnic status and age significantly affect these (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg.
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
  • Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures. Maude Lavanchy, Patrick Reichert, Jayanth Narayanan & Krishna Savani - forthcoming - Journal of Business Ethics.
    Despite the rapid adoption of technology in human resource departments, there is little empirical work that examines the potential challenges of algorithmic decision-making in the recruitment process. In this paper, we take the perspective of job applicants and examine how they perceive the use of algorithms in selection and recruitment. Across four studies on Amazon Mechanical Turk, we show that people in the role of a job applicant perceive algorithm-driven recruitment processes as less fair compared to human only or algorithm-assisted (...)
  • Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings, and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  • Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating (...)
  • Biased Humans, (Un)Biased Algorithms? Florian Pethig & Julia Kroenung - 2022 - Journal of Business Ethics 183 (3):637-652.
    Previous research has shown that algorithmic decisions can reflect gender bias. The increasingly widespread utilization of algorithms in critical decision-making domains (e.g., healthcare or hiring) can thus lead to broad and structural disadvantages for women. However, women often experience bias and discrimination through human decisions and may turn to algorithms in the hope of receiving neutral and objective evaluations. Across three studies (N = 1107), we examine whether women’s receptivity to algorithms is affected by situations in which they believe that (...)
  • Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI. Marilyn Giroux, Jungkeun Kim, Jacob C. Lee & Jongwon Park - 2022 - Journal of Business Ethics 178 (4):1027-1041.
    Several technological developments, such as self-service technologies and artificial intelligence, are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns (...)
  • People Prefer Moral Discretion to Algorithms: Algorithm Aversion Beyond Intransparency. Johanna Jauernig, Matthias Uhl & Gari Walkowitz - 2022 - Philosophy and Technology 35 (1):1-25.
    We explore aversion to the use of algorithms in moral decision-making. So far, this aversion has been explained mainly by the fear of opaque decisions that are potentially biased. Using incentivized experiments, we study what role the desire for human discretion plays in moral decision-making. This seems justified in light of evidence suggesting that people might not doubt the quality of algorithmic decisions, but still reject them. In our first study, we found that people prefer humans with decision-making discretion to (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
  • It cannot be right if it was written by AI: on lawyers’ preferences of documents perceived as authored by an LLM vs a human. Jakub Harasta, Tereza Novotná & Jaromir Savelka - forthcoming - Artificial Intelligence and Law:1-38.
    Large Language Models (LLMs) enable a future in which certain types of legal documents may be generated automatically. This has a great potential to streamline legal processes, lower the cost of legal services, and dramatically increase access to justice. While many researchers focus on proposing and evaluating LLM-based applications supporting tasks in the legal domain, there is a notable lack of investigations into how legal professionals perceive content if they believe an LLM has generated it. Yet, this is a critical (...)
  • Trusting autonomous vehicles as moral agents improves related policy support. Kristin F. Hurst & Nicole D. Sintov - 2022 - Frontiers in Psychology 13.
    Compared to human-operated vehicles, autonomous vehicles (AVs) offer numerous potential benefits. However, public acceptance of AVs remains low. Using 4 studies, including 1 preregistered experiment, the present research examines the role of trust in AV adoption decisions. Using the Trust-Confidence-Cooperation model as a conceptual framework, we evaluate whether perceived integrity of technology—a previously underexplored dimension of trust that refers to perceptions of the moral agency of a given technology—influences AV policy support and adoption intent. We find that perceived technology integrity predicts (...)
  • A growth mindset about human minds promotes positive responses to intelligent technology. Jianning Dang & Li Liu - 2022 - Cognition 220 (C):104985.
  • Negative performance feedback from algorithms or humans? Effect of medical researchers’ algorithm aversion on scientific misconduct. Ganli Liao, Feiwen Wang, Wenhui Zhu & Qichao Zhang - 2024 - BMC Medical Ethics 25 (1):1-20.
    Institutions are increasingly employing algorithms, rather than traditional human managers, to provide performance feedback to individuals by tracking productivity, conducting performance appraisals, and developing improvement plans. However, this shift has provoked considerable debate over the effectiveness and fairness of algorithmic feedback. This study investigates the effects of negative performance feedback (NPF) on the attitudes, cognition, and behavior of medical researchers, comparing NPF from algorithms versus humans. Two scenario-based experimental studies were conducted with a total sample of 660 medical researchers (algorithm (...)
  • People treat social robots as real social agents. Alexander Eng, Yam Kai Chi & Kurt Gray - 2023 - Behavioral and Brain Sciences 46:e28.
    When people interact with social robots, they treat them as real social agents. How people depict robots is fun to consider, but when people are confronted with embodied entities that move and talk – whether humans or robots – they interact with them as authentic social agents with minds, and not as mere representations.
  • Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions. Sebastian Krügel, Andreas Ostermaier & Matthias Uhl - 2022 - Philosophy and Technology 35 (1):1-37.
    Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest (...)
  • Machine and human agents in moral dilemmas: automation–autonomic and EEG effect. Federico Cassioli, Laura Angioletti & Michela Balconi - 2024 - AI and Society 39 (6):2677-2689.
    Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (n = 34) took part in a moral multi-trial dilemma-based task where the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures of morality, consciousness, responsibility, intentionality, and emotional impact were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, heart (...)
  • A qualified defense of top-down approaches in machine ethics. Tyler Cook - forthcoming - AI and Society:1-15.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  • Moral Judgments in the Age of Artificial Intelligence. Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • Should AI allocate livers for transplant? Public attitudes and ethical considerations. Max Drezga-Kleiminger, Joanna Demaree-Cotton, Julian Koplin, Julian Savulescu & Dominic Wilkinson - 2023 - BMC Medical Ethics 24 (1):1-11.
    Background: Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation; however, the ethics of this remains unexplored and the views of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented. Methods: We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey including online responses (...)
  • Adoption of AI-Enabled Tools in Social Development Organizations in India: An Extension of UTAUT Model. Ruchika Jain, Naval Garg & Shikha N. Khera - 2022 - Frontiers in Psychology 13.
    Social development organizations increasingly employ artificial intelligence-enabled tools to help team members collaborate effectively and efficiently. These tools are used in various team management tasks and activities. Based on the unified theory of acceptance and use of technology (UTAUT), this study explores various factors influencing employees’ use of AI-enabled tools. The study extends the model in two ways: a) by evaluating the impact of these tools on the employees’ collaboration and b) by exploring the moderating role of AI aversion. Data (...)
  • The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making. Gabi Schaap, Tibor Bosse & Paul Hendriks Vettehen - forthcoming - AI and Society:1-14.
    While algorithmic decision-making (ADM) is projected to increase exponentially in the coming decades, the academic debate on whether people are ready to accept, trust, and use ADM as opposed to human decision-making is ongoing. The current research aims at reconciling conflicting findings on ‘algorithmic aversion’ in the literature. It does so by investigating algorithmic aversion while controlling for two important characteristics that are often associated with ADM: increased benefits (monetary and accuracy) and decreased user control. Across three high-powered (Ntotal = (...)
  • People's judgments of humans and robots in a classic moral dilemma. Bertram F. Malle, Matthias Scheutz, Corey Cusimano, John Voiklis, Takanori Komatsu, Stuti Thapa & Salomi Aladia - 2025 - Cognition 254 (C):105958.
  • Machines and humans in sacrificial moral dilemmas: Required similarly but judged differently? Yueying Chu & Peng Liu - 2023 - Cognition 239 (C):105575.
  • Trait attribution explains human–robot interactions. Yochanan E. Bigman, Nicholas Surdel & Melissa J. Ferguson - 2023 - Behavioral and Brain Sciences 46:e23.
    Clark and Fischer (C&F) claim that trait attribution has major limitations in explaining human–robot interactions. We argue that the trait attribution approach can explain the three issues posited by C&F. We also argue that the trait attribution approach is parsimonious, as it assumes that the same mechanisms of social cognition apply to human–robot interaction.
  • Autonomous vehicles: How perspective-taking accessibility alters moral judgments and consumer purchasing behavior. Rose Martin, Petko Kusev & Paul van Schaik - 2021 - Cognition 212 (C):104666.
  • Morality on the road: Should machine drivers be more utilitarian than human drivers? Peng Liu, Yueying Chu, Siming Zhai, Tingru Zhang & Edmond Awad - 2025 - Cognition 254 (C):106011.
  • The existence of manual mode increases human blame for AI mistakes. Mads N. Arnestad, Samuel Meyers, Kurt Gray & Yochanan E. Bigman - 2024 - Cognition 252 (C):105931.