  • Personal AI, deception, and the problem of emotional bubbles.Philip Maxwell Thingbø Mlonyeni - forthcoming - AI and Society:1-12.
    Personal AI is a new type of AI companion, distinct from the prevailing forms of AI companionship. Instead of playing a narrow and well-defined social role, like friend, lover, caretaker, or colleague, with a set of pre-determined responses and behaviors, Personal AI is engineered to tailor itself to the user, including learning to mirror the user’s unique emotional language and attitudes. This paper identifies two issues with Personal AI. First, like other AI companions, it is deceptive about the presence of (...)
  • Relating Mori’s Uncanny Valley in generating conversations with artificial affective communication and natural language processing.Feni Betriana, Kyoko Osaka, Kazuyuki Matsumoto, Tetsuya Tanioka & Rozzano C. Locsin - 2021 - Nursing Philosophy 22 (2):e12322.
    Human beings express affinity (Shinwa‐kan in Japanese language) in communicating transactive engagements among healthcare providers, patients and healthcare robots. The appearance of healthcare robots and their language capabilities often feature characteristic and appropriate compassionate dialogical functions in human–robot interactions. Elements of healthcare robot configurations comprising its physiognomy and communication properties are founded on the positivist philosophical perspective of being the summation of composite parts, thereby mimicking human persons. This article reviews Mori's theory of the Uncanny Valley and its consequent debates, (...)
  • Sex Robots and Views from Nowhere: A Commentary on Jecker, Howard and Sparrow, and Wang.Kelly Kate Evans - 2021 - In Ruiping Fan & Mark J. Cherry (eds.), Sex Robots: Social Impact and the Future of Human Relations. Springer.
    This article explores the implications of what it means to moralize about future technological innovations. Specifically, I have been invited to comment on three papers that attempt to think about what seems to be an impending social reality: the availability of life-like sex robots. In response, I explore what it means to moralize about future technological innovations from a secular perspective, i.e., a perspective grounded in an immanent, socio-historically contingent view. I review the arguments of Nancy Jecker, Mark Howard and (...)
  • Could you hate a robot? And does it matter if you could?Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships.Helen Ryland - 2021 - Minds and Machines 31 (3):377-393.
    This article argues in defence of human–robot friendship. I begin by outlining the standard Aristotelian view of friendship, according to which there are certain necessary conditions which x must meet in order to ‘be a friend’. I explain how the current literature typically uses this Aristotelian view to object to human–robot friendships on theoretical and ethical grounds. Theoretically, a robot cannot be our friend because it cannot meet the requisite necessary conditions for friendship. Ethically, human–robot friendships are wrong because they (...)
  • Can a Robot Be a Good Colleague?Sven Nyholm & Jilles Smids - 2020 - Science and Engineering Ethics 26 (4):2169-2188.
    This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make (...)
  • What Does It Mean to Empathise with a Robot?Joanna K. Malinowska - 2021 - Minds and Machines 31 (3):361-376.
    Given that empathy allows people to form and maintain satisfying social relationships with other subjects, it is no surprise that this is one of the most studied phenomena in the area of human–robot interaction (HRI). But the fact that the term ‘empathy’ has strong social connotations raises a question: can it be applied to robots? Can we actually use social terms and explanations in relation to these inanimate machines? In this article, I analyse the range of uses of the term (...)
  • Vulnerability under the gaze of robots: relations among humans and robots.Nicola Liberati & Shoji Nagataki - 2019 - AI and Society 34 (2):333-342.
    The problem of artificial intelligence and human beings has always raised questions about possible interactions between them and the effects yielded by the introduction of such a non-human subject. Dreyfus deeply connects intelligence and the body from a phenomenological viewpoint. Drawing on his reading of Merleau-Ponty, he clearly stated that an intelligence must be embodied in a body in order to function. According to his suggestion, any AI designed to be human-like is doomed to failure if there is no tight bond with a (...)
  • Autonomous Systems in Society and War : Philosophical Inquiries.Linda Johansson - 2013 - Dissertation, Royal Institute of Technology, Stockholm
    The overall aim of this thesis is to look at some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – where the focus in this thesis is on military robots. The second issue, discussed in paper III, concerns how to make sure that advanced robots behave in an ethically adequate manner. The third issue, discussed in papers (...)
  • Can I Feel Your Pain? The Biological and Socio-Cognitive Factors Shaping People’s Empathy with Social Robots.Joanna Karolina Malinowska - 2022 - International Journal of Social Robotics 14 (2):341–355.
    This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people’s reactions to robots. I present arguments in favour of the position that people actually do empathise with robots. I also consider what circumstances shape human empathy with these entities. I propose that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one of (...)