  • Do Men Have No Need for “Feminist” Artificial Intelligence? Agentic and Gendered Voice Assistants in the Light of Basic Psychological Needs. Laura Moradbakhti, Simon Schreibelmayr & Martina Mara - 2022 - Frontiers in Psychology 13.
    Artificial Intelligence is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users’ three Basic Psychological Needs, namely autonomy, competence, and relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction on Intention (...)
  • Patient Preferences Concerning Humanoid Features in Healthcare Robots. Dane Leigh Gogoshin - 2024 - Science and Engineering Ethics 30 (6):1-16.
    In this paper, I argue that patient preferences concerning human physical attributes associated with race, culture, and gender should be excluded from public healthcare robot design. On one hand, healthcare should be (objective, universal) needs oriented. On the other hand, patient well-being (the aim of healthcare) is, in concrete ways, tied to preferences, as is patient satisfaction (a core WHO value). The shift toward patient-centered healthcare places patient preferences into the spotlight. Accordingly, the design of healthcare technology cannot simply disregard (...)
  • Automation, Alignment, and the Cooperative Interface. Julian David Jonker - 2024 - The Journal of Ethics 28 (3):483-504.
    The paper demonstrates that social alignment is distinct from value alignment as it is currently understood in the AI safety literature, and argues that social alignment is an important research agenda. Work provides an important example for the argument, since work is a cooperative endeavor, and it is part of the larger manifold of social cooperation. These cooperative aspects of work are individually and socially valuable, and so they must be given a central place when evaluating the impact of AI (...)
  • Comprehension and engagement in survey interviews with virtual agents. Frederick G. Conrad, Michael F. Schober, Matt Jans, Rachel A. Orlowski, Daniel Nielsen & Rachel Levenstein - 2015 - Frontiers in Psychology 6.
  • Gender and Age Stereotypes in Robotics for Eldercare: Ethical Implications of Stakeholder Perspectives from Technology Development, Industry, and Nursing. Merle Weßel, Niklas Ellerich-Groppe, Frauke Koppelin & Mark Schweda - 2022 - Science and Engineering Ethics 28 (4):1-15.
    Social categorizations regarding gender or age have proven to be relevant in human-robot interaction. Their stereotypical application in the development and implementation of robotics in eldercare is even discussed as a strategy to enhance the acceptance, well-being, and quality of life of older people. This raises serious ethical concerns, e.g., regarding autonomy of and discrimination against users. In this paper, we examine how relevant professional stakeholders perceive and evaluate the use of social categorizations and stereotypes regarding gender and age in (...)
  • Gender Bias and Conversational Agents: an ethical perspective on Social Robotics. Fabio Fossa & Irene Sucameli - 2022 - Science and Engineering Ethics 28 (3):1-23.
    The increasing spread of conversational agents makes it urgent to tackle the ethical issues linked to their design. In fact, developers frequently include in their products cues that trigger social biases in order to maximize the performance and quality of human-machine interactions. The present paper discusses whether and to what extent it is ethically sound to intentionally trigger gender biases through the design of virtually embodied conversational agents. After outlining the complex dynamics involving social biases, social robots, and (...)
  • Gender preferences for robots and gender equality orientation in communication situations. Tomohiro Suzuki & Tatsuya Nomura - forthcoming - AI and Society:1-10.
    The individual physical appearance of a robot is considered significant, much as a person’s is. We investigated whether users prefer robots with male or female physical appearances for use in daily communication situations and whether egalitarian gender role attitudes are related to this preference. One thousand adult men and women aged 20–60 participated in the questionnaire survey. The results of our study showed that in most situations and for most subjects, “males” was not selected and “females” or (...)
  • A possibility of inappropriate use of gender studies in human-robot interaction. Tatsuya Nomura - 2020 - AI and Society 35 (3):751-754.
  • Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot. Darci Gallimore, Joseph B. Lyons, Thy Vo, Sean Mahoney & Kevin T. Wynne - 2019 - Frontiers in Psychology 10.
  • Toward the search for the perfect blade runner: a large-scale, international assessment of a test that screens for “humanness sensitivity”. Robert Epstein, Maria Bordyug, Ya-Han Chen, Yijing Chen, Anna Ginther, Gina Kirkish & Holly Stead - 2023 - AI and Society 38 (4):1543-1563.
    We introduce a construct called “humanness sensitivity,” which we define as the ability to recognize uniquely human characteristics. To evaluate the construct, we used a “concurrent study design” to conduct an internet-based study with a convenience sample of 42,063 people from 88 countries (52.4% from the U.S. and Canada). We sought to determine to what extent people could identify subtle characteristics of human behavior, thinking, emotions, and social relationships which currently distinguish humans from non-human entities such as bots. Many people were (...)
  • The quest for appropriate models of human-likeness: anthropomorphism in media equation research. Nils Klowait - 2018 - AI and Society 33 (4):527-536.
    Nass’ and Reeves’ media equation paradigm within human–computer interaction challenges long-held assumptions about how users approach computers. Given a rudimentary set of cues present in the system’s design, users are said to unconsciously treat computers as genuine interactants—extending rules of politeness, biases and human interactive conventions to machines. Since the results have wide-ranging implications for HCI research methods, interface design and user experiences, researchers are hard-pressed to experimentally verify the paradigm. This paper focuses on the methodology of attributing the necessary (...)
  • The Ethics of Terminology: Can We Use Human Terms to Describe AI? Ophelia Deroy - 2023 - Topoi 42 (3):881-889.
    Despite facing significant criticism for assigning human-like characteristics to artificial intelligence, phrases like “trustworthy AI” are still commonly used in official documents and ethical guidelines. It is essential to consider why institutions continue to use these phrases, even though they are controversial. This article critically evaluates various reasons for using these terms, including ontological, legal, communicative, and psychological arguments. All these justifications share the common feature of trying to justify the official use of terms like “trustworthy AI” by appealing to (...)
  • The effect of service robot occupational gender stereotypes on customers' willingness to use them. Qian Hu, Xingguang Pan, Jia Luo & Yiduo Yu - 2022 - Frontiers in Psychology 13.
    Customers have obvious occupational gender stereotypes for service employees. In recent years, intelligent service robots have been widely used in the hospitality industry and have also been given gender characteristics to attract customers to use them. However, whether and when the usage of gendered service robots is effective remains to be explored. This research focuses on customers' occupational gender stereotypes and the gender of service robots, examining the influences of their consistency on customers' willingness to use service robots through three (...)
  • Do People Regard Robots as Human-Like Social Partners? Evidence From Perspective-Taking in Spatial Descriptions. Chengli Xiao, Liufei Xu, Yuqing Sui & Renlai Zhou - 2021 - Frontiers in Psychology 11.
    Spatial communications are essential to the survival and social interaction of human beings. In science fiction and the near future, robots are supposed to be able to understand spatial languages to collaborate and cooperate with humans. However, it remains unknown whether human speakers regard robots as human-like social partners. In this study, human speakers describe target locations to an imaginary human or robot addressee under various scenarios varying in relative speaker–addressee cognitive burden. Speakers made equivalent perspective choices to human and (...)