Citations of:

Robots should be slaves, by Joanna J. Bryson

In Yorick Wilks (ed.), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues. John Benjamins Publishing. pp. 63-74 (2010)

  • Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege. Lantz Fleming Miller - 2015 - Human Rights Review 16 (4):369-391.
    As engineers propose constructing humanlike automata, the question arises as to whether such machines merit human rights. The issue warrants serious and rigorous examination, although it has not yet cohered into a conversation. To put it into a sure direction, this paper proposes phrasing it in terms of whether humans are morally obligated to extend to maximally humanlike automata full human rights, or those set forth in common international rights documents. This paper’s approach is to consider the ontology of humans (...)
    15 citations
  • Machines and the Moral Community. Erica L. Neely - 2013 - Philosophy and Technology 27 (1):97-111.
    A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as to any (...)
    31 citations
  • Designing People to Serve. Steve Petersen - 2011 - In Patrick Lin, Keith Abney & George A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press.
    I argue that, contrary to intuition, it would be both possible and permissible to design people - whether artificial or organic - who by their nature desire to do tasks we find unpleasant.
    24 citations
  • Uses and Abuses of AI Ethics. Lily E. Frank & Michal Klincewicz - forthcoming - In David J. Gunkel (ed.), Handbook of the Ethics of AI. Edward Elgar Publishing.
    In this chapter we take stock of some of the complexities of the sprawling field of AI ethics. We consider questions like "what is the proper scope of AI ethics?" And "who counts as an AI ethicist?" At the same time, we flag several potential uses and abuses of AI ethics. These include challenges for the AI ethicist, including what qualifications they should have; the proper place and extent of futuring and speculation in the field; and the dilemmas concerning how (...)
  • The expected AI as a sociocultural construct and its impact on the discourse on technology. Auli Viidalepp - 2023 - Dissertation, University of Tartu
    The thesis introduces and criticizes the discourse on technology, with a specific reference to the concept of AI. The discourse on AI is particularly saturated with reified metaphors which drive connotations and delimit understandings of technology in society. To better analyse the discourse on AI, the thesis proposes the concept of “Expected AI”, a composite signifier filled with historical and sociocultural connotations, and numerous referent objects. Relying on cultural semiotics, science and technology studies, and a diverse selection of heuristic concepts, (...)
  • Toward Abiozoomorphism in Social Robotics? Discussion of a New Category between Mechanical Entities and Living Beings. Jaana Parviainen & Tuuli Turja - 2021 - Journal of Posthuman Studies 5 (2):150–168.
    Social robotics designed to enhance anthropomorphism and zoomorphism seeks to evoke feelings of empathy and other positive emotions in humans. While it is difficult to treat these machines as mere artefacts, the simulated lifelike qualities of robots easily lead to misunderstandings that the machines could be intentional. In this post-anthropocentrically positioned article, we look for a solution to the dilemma by developing a novel concept, “abiozoomorphism.” Drawing on Donna Haraway’s conceptualization of companion species, we address critical aspects of why robots (...)
  • Ectogestative Technology and the Beginning of Life. Lily Frank, Julia Hermann, Ilona Kavege & Anna Puzio - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 113–140.
    How could ectogestative technology disrupt gender roles, parenting practices, and concepts such as ‘birth’, ‘body’, or ‘parent’? In this chapter, we situate this emerging technology in the context of the history of reproductive technologies and analyse the potential social and conceptual disruptions to which it could contribute. An ectogestative device, better known as ‘artificial womb’, enables the extra-uterine gestation of a human being, or mammal more generally. It is currently developed with the main goal of improving the survival chances of (...)
  • Social Robots and Society. Sven Nyholm, Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Lohr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well (...)
  • The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
    1 citation
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • Artificial Agency and the Game of Semantic Extension. Fabio Fossa - 2021 - Interdisciplinary Science Reviews 46 (4):440-457.
    Artificial agents are commonly described by using words that traditionally belong to the semantic field of organisms, particularly of animal and human life. I call this phenomenon the game of semantic extension. However, the semantic extension of words as crucial as “autonomous”, “intelligent”, “creative”, “moral”, and so on, is often perceived as unsatisfactory, which is signalled with the extensive use of inverted commas or other syntactical cues. Such practice, in turn, has provoked harsh criticism that usually refers back to the (...)
  • The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns. Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions (...)
    12 citations
  • Robot rights in joint action. Guido Löhr - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer.
    The claim I want to explore in this paper is simple. In social ontology, Margaret Gilbert, Abe Roth, Michael Bratman, Antonie Meijers, Facundo Alonso and others talk about rights or entitlements against other participants in joint action. I employ several intuition pumps to argue that we have reason to assume that such entitlements or rights can be ascribed even to non-sentient robots that we collaborate with. Importantly, such entitlements are primarily identified in terms of our normative discourse. Justified criticism, for (...)
  • Social robots as partners? Paul Healy - forthcoming - AI and Society:1-8.
    Although social robots are achieving increasing prominence as companions and carers, their status as partners in an interactive relationship with humans remains unclear. The present paper explores this issue, first, by considering why social robots cannot truly qualify as “Thous”, that is, as surrogate human partners, as they are often assumed to be, and then by briefly considering why it will not do to construe them as mere machines, slaves, or pets, as others have contended. Having concluded that none of (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
    3 citations
  • Thinking unwise: a relational u-turn. Nicholas Barrow - 2022 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. IOS Press.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  • Designing AI with Rights, Consciousness, Self-Respect, and Freedom. Eric Schwitzgebel & Mara Garza - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, (...)
    4 citations
  • Artificial intelligence in fiction: between narratives and metaphors. Isabella Hermann - 2023 - AI and Society 38 (1):319-329.
    Science-fiction (SF) has become a reference point in the discourse on the ethics and risks surrounding artificial intelligence (AI). Thus, AI in SF—science-fictional AI—is considered part of a larger corpus of ‘AI narratives’ that are analysed as shaping the fears and hopes of the technology. SF, however, is not a foresight or technology assessment, but tells dramas for a human audience. To make the drama work, AI is often portrayed as human-like or autonomous, regardless of the actual technological limitations. Taking (...)
  • Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
    4 citations
  • The hard limit on human nonanthropocentrism. Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
    2 citations
  • Debate: What is Personhood in the Age of AI? David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36:473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
    7 citations
  • Theopolis Monk: Envisioning a Future of A.I. Public Service. Scott H. Hawley - 2019 - In Newton Lee (ed.), The Transhumanism Handbook. Springer Verlag. pp. 271-300.
    Visions of future applications of artificial intelligence tend to veer toward the naively optimistic or frighteningly dystopian, neglecting the numerous human factors necessarily involved in the design, deployment and oversight of such systems. The dream that AI systems may somehow replace the irregularities and struggles of human governance with unbiased efficiency is seen to be non-scientific and akin to a religious hope, whereas the current trajectory of AI development indicates that it will increasingly serve as a tool by which humans (...)
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
    20 citations
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
    19 citations
  • Anthropomorphism in AI. Arleen Salles, Kathinka Evers & Michele Farisco - 2020 - American Journal of Bioethics Neuroscience 11 (2):88-95.
    AI research is growing rapidly raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some (...)
    22 citations
  • Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and (...)
    13 citations
  • Responsible research for the construction of maximally humanlike automata: the paradox of unattainable informed consent. Lantz Fleming Miller - 2020 - Ethics and Information Technology 22 (4):297-305.
    Since the Nuremberg Code and the first Declaration of Helsinki, globally there has been increasing adoption and adherence to procedures for ensuring that human subjects in research are as well informed as possible of the study’s reasons and risks and voluntarily consent to serving as subject. To do otherwise is essentially viewed as violation of the human research subject’s legal and moral rights. However, with the recent philosophical concerns about responsible robotics, the limits and ambiguities of research-subjects ethical codes become (...)
    2 citations
  • Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
    73 citations
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
    10 citations
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
    32 citations
  • The Philosophical Case for Robot Friendship. John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
    24 citations
  • Should we be thinking about sex robots? John Danaher - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. MIT Press.
    The chapter introduces the edited collection Robot Sex: Social and Ethical Implications. It proposes a definition of the term 'sex robot' and examines some current prototype models. It also considers the three main ethical questions one can ask about sex robots: (i) do they benefit/harm the user? (ii) do they benefit/harm society? or (iii) do they benefit/harm the robot?
    4 citations
  • Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. S. M. Solaiman - 2017 - Artificial Intelligence and Law 25 (2):155-179.
    Robots are now associated with various aspects of our lives. These sophisticated machines have been increasingly used in different manufacturing industries and services sectors for decades. During this time, they have been a factor in causing significant harm to humans, prompting questions of liability. Industrial robots are presently regarded as products for liability purposes. In contrast, some commentators have proposed that robots be granted legal personality, with an overarching aim of exonerating the respective creators and users of these artefacts from (...)
    19 citations
  • Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Lily Frank & Sven Nyholm - 2017 - Artificial Intelligence and Law 25 (3):305-323.
    The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three (...)
    19 citations
  • Anthropological Crisis or Crisis in Moral Status: a Philosophy of Technology Approach to the Moral Consideration of Artificial Intelligence. Joan Llorca Albareda - 2024 - Philosophy and Technology 37 (1):1-26.
    The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings begins to show signs of a number of properties that are nuclear to the understanding of moral agency. It makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter within the circle of moral consideration. This raises the (...)
    1 citation
  • What Makes Work “Good” in the Age of Artificial Intelligence (AI)? Islamic Perspectives on AI-Mediated Work Ethics. Mohammed Ghaly - forthcoming - The Journal of Ethics:1-25.
    Artificial intelligence (AI) technologies are increasingly creeping into the work sphere, thereby gradually questioning and/or disturbing the long-established moral concepts and norms communities have been using to define what makes work good. Each community, and Muslims make no exception in this regard, has to revisit their moral world to provide well-thought frameworks that can engage with the challenging ethical questions raised by the new phenomenon of AI-mediated work. For a systematic analysis of the broad topic of AI-mediated work ethics from (...)
    1 citation
  • ‘How could you even ask that?’ Moral considerability, uncertainty and vulnerability in social robotics. Alexis Elder - 2020 - Journal of Sociotechnical Critique 1 (1):1-23.
    When it comes to social robotics (robots that engage human social responses via “eyes” and other facial features, voice-based natural-language interactions, and even evocative movements), ethicists, particularly in European and North American traditions, are divided over whether and why they might be morally considerable. Some argue that moral considerability is based on internal psychological states like consciousness and sentience, and debate about thresholds of such features sufficient for ethical consideration, a move sometimes criticized for being overly dualistic in its framing (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
    1 citation
  • Should criminal law protect love relation with robots? Kamil Mamak - forthcoming - AI and Society:1-10.
    Whether or not we call a love-like relationship with robots true love, some people may feel and claim that, for them, it is a sufficient substitute for love relationship. The love relationship between humans has a special place in our social life. On the grounds of both morality and law, our significant other can expect special treatment. It is understandable that, precisely because of this kind of relationship, we save our significant other instead of others or will not testify against (...)
    2 citations
  • The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership. Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by the (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
    12 citations
  • Should We Treat Teddy Bear 2.0 as a Kantian Dog? Four Arguments for the Indirect Moral Standing of Personal Social Robots, with Implications for Thinking About Animals and Humans. Mark Coeckelbergh - 2020 - Minds and Machines 31 (3):337-360.
    The use of autonomous and intelligent personal social robots raises questions concerning their moral standing. Moving away from the discussion about direct moral standing and exploring the normative implications of a relational approach to moral standing, this paper offers four arguments that justify giving indirect moral standing to robots under specific conditions based on some of the ways humans—as social, feeling, playing, and doubting beings—relate to them. The analogy of “the Kantian dog” is used to assist reasoning about this. The (...)
    6 citations
  • Robots in the Workplace: a Threat to—or Opportunity for—Meaningful Work? Jilles Smids, Sven Nyholm & Hannah Berkers - 2020 - Philosophy and Technology 33 (3):503-522.
    The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right to access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects (...)
    19 citations
  • What’s Wrong with Designing People to Serve? Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
    3 citations
  • How to Treat Machines that Might Have Minds. Nicholas Agar - 2020 - Philosophy and Technology 33 (2):269-282.
    This paper offers practical advice about how to interact with machines that we have reason to believe could have minds. I argue that we should approach these interactions by assigning credences to judgements about whether the machines in question can think. We should treat the premises of philosophical arguments about whether these machines can think as offering evidence that may increase or reduce these credences. I describe two cases in which you should refrain from doing as your favored philosophical view (...)
    6 citations
  • Challenges for an Ontology of Artificial Intelligence. Scott H. Hawley - 2019 - Perspectives on Science and Christian Faith 71 (2):83-95.
    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be (...)
    2 citations
  • Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics. Fabio Fossa - 2018 - In Mark Coeckelbergh, Janina Loh, Michael Funk, Joanna Seibt & Marco Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. pp. 103-111.
    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
    45 citations
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics. Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
    43 citations
  • The other question: can and should robots have rights? David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
    58 citations