References
  • Undisruptable or stable concepts: can we design concepts that can avoid conceptual disruption, normative critique, and counterexamples?Björn Lundgren - 2024 - Ethics and Information Technology 26 (2):1-11.
    It has been argued that our concepts can be disrupted or challenged by technology or normative concerns, which raises the question of whether we can create, design, engineer, or define more robust concepts that avoid counterexamples and conceptual challenges that can lead to conceptual disruption. In this paper, it is argued that we can. This argument is presented through a case study of a definition in the technological domain.
  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • The Technological Future of Love.Sven Nyholm, John Danaher & Brian D. Earp - 2022 - In André Grahle, Natasha McKeever & Joe Saunders (eds.), Philosophy of Love in the Past, Present, and Future. Routledge. pp. 224-239.
    How might emerging and future technologies—sex robots, love drugs, anti-love drugs, or algorithms to track, quantify, and ‘gamify’ romantic relationships—change how we understand and value love? We canvass some of the main ethical worries posed by such technologies, while also considering whether there are reasons for “cautious optimism” about their implications for our lives. Along the way, we touch on some key ideas from the philosophies of love and technology.
  • Social robots and digital well-being: how to design future artificial agents.Matthew J. Dennis - 2021 - Mind and Society 21 (1):37-50.
    Value-sensitive design theorists propose that a range of values should inform how future social robots are engineered. This article explores a new value, digital well-being, and proposes that the next generation of social robots should be designed to facilitate this value in those who use or come into contact with these machines. To do this, I explore how the morphology of social robots is closely connected to digital well-being. I argue that a key decision is whether social robots are (...)
  • Robot Betrayal: a guide to the ethics of robotic deception.John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in (...)
  • Sex Robots and Views from Nowhere: A Commentary on Jecker, Howard and Sparrow, and Wang.Kelly Kate Evans - 2021 - In Ruiping Fan & Mark J. Cherry (eds.), Sex Robots: Social Impact and the Future of Human Relations. Springer.
    This article explores the implications of what it means to moralize about future technological innovations. Specifically, I have been invited to comment on three papers that attempt to think about what seems to be an impending social reality: the availability of life-like sex robots. In response, I explore what it means to moralize about future technological innovations from a secular perspective, i.e., a perspective grounded in an immanent, socio-historically contingent view. I review the arguments of Nancy Jecker, Mark Howard and (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots.Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot interactions per se, and we need (...)
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation.Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • Engineering responsibility.Nicholas Sars - 2022 - Ethics and Information Technology 24 (3):1-10.
    Many optimistic responses have been proposed to bridge the threat of responsibility gaps which artificial systems create. This paper identifies a question which arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advance such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good (...)
  • Could you hate a robot? And does it matter if you could?Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships.Helen Ryland - 2021 - Minds and Machines 31 (3):377-393.
    This article argues in defence of human–robot friendship. I begin by outlining the standard Aristotelian view of friendship, according to which there are certain necessary conditions which x must meet in order to ‘be a friend’. I explain how the current literature typically uses this Aristotelian view to object to human–robot friendships on theoretical and ethical grounds. Theoretically, a robot cannot be our friend because it cannot meet the requisite necessary conditions for friendship. Ethically, human–robot friendships are wrong because they (...)
  • Meaning in Life in AI Ethics—Some Trends and Perspectives.Sven Nyholm & Markus Rüther - 2023 - Philosophy and Technology 36 (2):1-24.
    In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We (...)
  • Can a Robot Be a Good Colleague?Sven Nyholm & Jilles Smids - 2020 - Science and Engineering Ethics 26 (4):2169-2188.
    This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make (...)
  • Should criminal law protect love relation with robots?Kamil Mamak - forthcoming - AI and Society:1-10.
    Whether or not we call a love-like relationship with robots true love, some people may feel and claim that, for them, it is a sufficient substitute for a love relationship. The love relationship between humans has a special place in our social life. On the grounds of both morality and law, our significant other can expect special treatment. It is understandable that, precisely because of this kind of relationship, we save our significant other instead of others or will not testify against (...)
  • Military robots should not look like humans.Kamil Mamak & Kaja Kowalczewska - 2023 - Ethics and Information Technology 25 (3):1-10.
    Using robots in military contexts is problematic on many levels. There are social, legal, and ethical issues that should be discussed before their wider deployment. In this paper, we focus on an additional problem: their human likeness. We claim that military robots should not look like humans. That design choice may bring additional risks that endanger human lives and thereby contradicts the very justification for deploying robots at war, which is decreasing human deaths and injuries. We discuss (...)
  • The Robotic Touch: Why there is no good reason to prefer human nurses to carebots.Karen Lancaster - 2019 - Philosophy in the Contemporary World 25 (2):88-109.
    An elderly patient in a care home only wants human nurses to provide her care – not robots. If she selected her carers based on skin colour, it would be seen as racist and morally objectionable, but is choosing a human nurse instead of a robot also morally objectionable and speciesist? A plausible response is that it is not, because humans provide a better standard of care than robots do, making such a choice justifiable. In this paper, I show why (...)
  • Introduction to the Special Issue on “Artificial Speakers - Philosophical Questions and Implications”.Hendrik Kempt, Jacqueline Bellon & Sebastian Nähr-Wagener - 2021 - Minds and Machines 31 (4):465-470.
  • Sociable Robots for Later Life: Carebots, Friendbots and Sexbots.Nancy S. Jecker - 2021 - In Ruiping Fan & Mark J. Cherry (eds.), Sex Robots: Social Impact and the Future of Human Relations. Springer. pp. 25-40.
    This chapter discusses three types of sociable robots for older adults: robotic caregivers; robotic friends; and sex robots. The central argument holds that society ought to make reasonable efforts to provide these types of robots and that under certain conditions, omitting such support not only harms older adults but poses threats to their dignity. The argument proceeds stepwise. First, the chapter establishes that assisting care-dependent older adults to perform activities of daily living is integral to respecting dignity. Here, (...)
  • The Moral Standing of Social Robots: Untapped Insights from Africa.Nancy S. Jecker, Caesar A. Atiure & Martin Odei Ajei - 2022 - Philosophy and Technology 35 (2):1-22.
    This paper presents an African relational view of social robots’ moral standing which draws on the philosophy of ubuntu. The introduction places the question of moral standing in historical and cultural contexts. Section 2 demonstrates an ubuntu framework by applying it to the fictional case of a social robot named Klara, taken from Ishiguro’s novel, Klara and the Sun. We argue that an ubuntu ethic assigns moral standing to Klara, based on her relational qualities and pro-social virtues. Section 3 introduces (...)
  • Friendship, markets, and companionate robots for children.Mary Healy - 2023 - Journal of Philosophy of Education 57 (3):661-677.
    The aim of this article is to examine how markets enable companionship to be disconnected from the concept of friendship, thus enabling an illusion of companionship without the demands of friendship. As friendship is a crucial early relationship for children, this is particularly germane to the world of education. It recognizes the previous lack of philosophical attention to the idea of companionship—a key factor in friendship—and that this omission contributes to a lack of clarity on a variety of issues. Starting (...)
  • Debate: What is Personhood in the Age of AI?David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36:473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  • All too human? Identifying and mitigating ethical risks of Social AI.Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
  • The Future of Value Sensitive Design.Batya Friedman, David Hendry, Steven Umbrello, Jeroen Van Den Hoven & Daisy Yoo - 2020 - Paradigm Shifts in ICT Ethics: Proceedings of the 18th International Conference ETHICOMP 2020.
    In this panel, we explore the future of value sensitive design (VSD). The stakes are high. Many in public and private sectors and in civil society are gradually realizing that taking our values seriously implies that we have to ensure that values effectively inform the design of technology which, in turn, shapes people’s lives. Value sensitive design offers a highly developed set of theory, tools, and methods to systematically do so.