  • A growth mindset about human minds promotes positive responses to intelligent technology. Jianning Dang & Li Liu - 2022 - Cognition 220 (C):104985.
  • Why don’t synaesthetic colours adapt away? Dave Ward - 2012 - Philosophical Studies 159 (1):123-138.
    Synaesthetes persistently perceive certain stimuli as systematically accompanied by illusory colours, even though they know those colours to be illusory. This appears to contrast with cases where a subject’s colour vision adapts to systematic distortions caused by wearing coloured goggles. Given that each case involves longstanding systematic distortion of colour perception that the subjects recognize as such, how can a theory of colour perception explain the fact that perceptual adaptation occurs in one case but not the other? I argue that (...)
  • Can nurses in clinical practice ascribe responsibility to intelligent robots? Jerick Tabudlo, Letty Kuan & Paul Froilan Garma - 2022 - Nursing Ethics 29 (6):1457-1465.
    Background The twenty-first century has marked exponential growth in the use of intelligent robots and artificial intelligence in nursing compared to previous decades. To the best of our knowledge, this article is the first to respond to the question, “Can nurses in clinical practice ascribe responsibility to intelligent robots and artificial intelligence when they commit errors?” Purpose The objective of this article is to present two worldviews (anthropocentrism and biocentrism) in responding to the question at hand, chosen based on the (...)
  • Moral Judgments in the Age of Artificial Intelligence. Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • On the moral permissibility of robot apologies. Makoto Kureha - forthcoming - AI and Society:1-11.
    Robots that incorporate the function of apologizing have emerged in recent years. This paper examines the moral permissibility of making robots apologize. First, I characterize the nature of apology based on analyses conducted in multiple scholarly domains. Next, I present a prima facie argument that robot apologies are not permissible because they may harm human societies by inducing the misattribution of responsibility. Subsequently, I respond to a possible objection to this prima facie argument, based on the interpretation that attributing responsibility (...)
  • Trusting autonomous vehicles as moral agents improves related policy support. Kristin F. Hurst & Nicole D. Sintov - 2022 - Frontiers in Psychology 13.
    Compared to human-operated vehicles, autonomous vehicles (AVs) offer numerous potential benefits. However, public acceptance of AVs remains low. Across four studies, including one preregistered experiment, the present research examines the role of trust in AV adoption decisions. Using the Trust-Confidence-Cooperation model as a conceptual framework, we evaluate whether perceived integrity of technology—a previously underexplored dimension of trust that refers to perceptions of the moral agency of a given technology—influences AV policy support and adoption intent. We find that perceived technology integrity predicts (...)
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Robot Authority in Human-Robot Teaming: Effects of Human-Likeness and Physical Embodiment on Compliance. Kerstin S. Haring, Kelly M. Satterfield, Chad C. Tossell, Ewart J. de Visser, Joseph R. Lyons, Vincent F. Mancuso, Victor S. Finomore & Gregory J. Funke - 2021 - Frontiers in Psychology 12.
    The anticipated social capabilities of robots may allow them to serve in authority roles as part of human-machine teams. To date, it is unclear if, and to what extent, human team members will comply with requests from their robotic teammates, and how such compliance compares to requests from human teammates. This research examined how the human-likeness and physical embodiment of a robot affect compliance with a robot's request to perseverate, using a novel task paradigm. Across a set of two studies, (...)
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition. Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade-off between monetary profit and fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...)
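    The profit-versus-fairness trade-off Haas describes lends itself to a compact illustration. Below is a minimal sketch, compressed from a full gridworld to a single repeated choice trained with tabular Q-learning; the action set, payoffs, fairness signal, and weight `w_fair` are illustrative assumptions, not the paper's actual specification.

    ```python
    import random
    from collections import defaultdict

    # Illustrative trade-off loosely inspired by Haas (2020), reduced to one
    # repeated choice: a high-profit/unfair split versus a lower-profit/fair
    # split. All numbers below are assumptions made for illustration.
    ACTIONS = ["unfair_split", "fair_split"]
    PAYOFF = {"unfair_split": 10.0, "fair_split": 6.0}   # assumed monetary payoffs
    FAIRNESS = {"unfair_split": 0.0, "fair_split": 1.0}  # assumed fairness signal

    def reward(action, w_fair=5.0):
        """Scalar reward mixing profit with a weighted fairness term."""
        return PAYOFF[action] + w_fair * FAIRNESS[action]

    def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
        """Tabular Q-learning over the one-state choice (a bandit, effectively)."""
        rng = random.Random(seed)
        q = defaultdict(float)
        for _ in range(episodes):
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)               # explore
            else:
                action = max(ACTIONS, key=lambda a: q[a])  # exploit
            q[action] += alpha * (reward(action) - q[action])  # one-step update
        return dict(q)

    if __name__ == "__main__":
        print(train())  # with w_fair=5.0, the fair split earns the higher value
    ```

    With the assumed weight, the learned values converge toward 11.0 for the fair split and 10.0 for the unfair one; dropping `w_fair` below 4.0 reverses the preference, which is exactly the kind of trade-off the gridworld framework is meant to probe.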
  • Gender Categories as Dual‐Character Concepts? Cai Guo, Carol S. Dweck & Ellen M. Markman - 2021 - Cognitive Science 45 (5):e12954.
    Seminal work by Knobe, Prasada, and Newman (2013) distinguished a set of concepts, which they named “dual‐character concepts.” Unlike traditional concepts, they require two distinct criteria for determining category membership. For example, the prototypical dual‐character concept “artist” has both a concrete dimension of artistic skills, and an abstract dimension of aesthetic sensibility and values. Therefore, someone can be a good artist on the concrete dimension but not truly an artist on the abstract dimension. Does this analysis capture people's understanding of (...)
  • When does “no” mean no? Insights from sex robots. Anastasiia D. Grigoreva, Joshua Rottman & Arber Tasimi - 2024 - Cognition 244 (C):105687.
  • Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI. Marilyn Giroux, Jungkeun Kim, Jacob C. Lee & Jongwon Park - 2022 - Journal of Business Ethics 178 (4):1027-1041.
    Several technological developments, such as self-service technologies and artificial intelligence, are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns (...)
  • Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating (...)
  • People treat social robots as real social agents. Alexander Eng, Yam Kai Chi & Kurt Gray - 2023 - Behavioral and Brain Sciences 46:e28.
    When people interact with social robots, they treat them as real social agents. How people depict robots is fun to consider, but when people are confronted with embodied entities that move and talk – whether humans or robots – they interact with them as authentic social agents with minds, and not as mere representations.