Citations of:

Designing People to Serve

In Patrick Lin, Keith Abney & George A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press (2011)

  • Artificial Intelligence and the future of work. John-Stewart Gordon & David J. Gunkel - forthcoming - AI and Society:1-7.
    In this paper, we delve into the significant impact of recent advancements in Artificial Intelligence (AI) on the future landscape of work. We discuss the looming possibility of mass unemployment triggered by AI and the societal repercussions of this transition. Despite the challenges this shift presents, we argue that it also unveils opportunities to mitigate social inequalities, combat global poverty, and empower individuals to follow their passions. Amidst this discussion, we also touch upon the existential question of the purpose of (...)
  • Should the State Prohibit the Production of Artificial Persons? Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  • Designing AI with Rights, Consciousness, Self-Respect, and Freedom. Eric Schwitzgebel & Mara Garza - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, (...)
  • Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Should we be thinking about sex robots? John Danaher - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. MIT Press.
    The chapter introduces the edited collection Robot Sex: Social and Ethical Implications. It proposes a definition of the term 'sex robot' and examines some current prototype models. It also considers the three main ethical questions one can ask about sex robots: (i) do they benefit/harm the user? (ii) do they benefit/harm society? or (iii) do they benefit/harm the robot?
  • (1 other version) Moral Enhancement and Moral Freedom: A Critique of the Little Alex Problem. John Danaher - 2018 - Royal Institute of Philosophy Supplement 83:233-250.
    A common objection to moral enhancement is that it would undermine our moral freedom and that this is a bad thing because moral freedom is a great good. Michael Hauskeller has defended this view on a couple of occasions using an arresting thought experiment called the 'Little Alex' problem. In this paper, I reconstruct the argument Hauskeller derives from this thought experiment and subject it to critical scrutiny. I claim that the argument ultimately fails because (a) it assumes that moral (...)
  • Is it good for them too? Ethical concern for the sexbots. Steve Petersen - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. MIT Press. pp. 155-171.
    In this chapter I'd like to focus on a small corner of sexbot ethics that is rarely considered elsewhere: the question of whether and when being a sexbot might be good---or bad---*for the sexbot*. You might think this means you are in for a dry sermon about the evils of robot slavery. If so, you'd be wrong; the ethics of robot servitude are far more complicated than that. In fact, if the arguments here are right, designing a robot to serve (...)
  • Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  • Sex Work, Technological Unemployment and the Basic Income Guarantee. John Danaher - 2014 - Journal of Evolution and Technology 24 (1):113-130.
    Is sex work (specifically, prostitution) vulnerable to technological unemployment? Several authors have argued that it is. They claim that the advent of sophisticated sexual robots will lead to the displacement of human prostitutes, just as, say, the advent of sophisticated manufacturing robots has displaced many traditional forms of factory labour. But are they right? In this article, I critically assess the argument that has been made in favour of this displacement hypothesis. Although I grant the argument a degree of credibility, (...)
  • Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • (1 other version) The philosophy of computer science. Raymond Turner - 2013 - Stanford Encyclopedia of Philosophy.
  • Singularitarianism and Schizophrenia. Galanos Vasileios - 2016 - AI and Society:1-18.
    Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson’s core theories of ecology of mind, schismogenesis, and double bind, are hereby revisited, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized in the field of Roboethics as to a twofold aim: (a) the proposal of a rigid ethical standpoint toward both artificial and non-artificial agents, and (b) an explanatory analysis of the reasons bringing about such a (...)
  • Can we design artificial persons without being manipulative? Maciej Musiał - 2024 - AI and Society 39 (3):1251-1260.
    If we could build artificial persons (APs) with a moral status comparable to that of a typical human being, how should we design those APs in the right way? This question has been addressed mainly in terms of designing APs devoted to being servants (AP servants) and debated in reference to their autonomy and the harm they might experience. Recently, it has been argued that even if developing AP servants would neither deprive them of autonomy nor cause any net harm, (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  • Moralische Roboter: Humanistisch-philosophische Grundlagen und didaktische Anwendungen. André Schmiljun & Iga Maria Schmiljun - 2024 - transcript Verlag.
    Do robots need moral competence? The answer is yes. On the one hand, robots need moral competence in order to understand our world of rules, regulations, and values; on the other hand, they need it in order to be accepted by those around them. But how can moral competence be implemented in robots? What philosophical challenges should we expect? And how can we prepare ourselves and our children for robots that will one day possess moral competence? From a humanistic-philosophical perspective, André and Iga Maria Schmiljun sketch initial answers to these questions and develop (...)
  • Sims and Vulnerability: On the Ethics of Creating Emulated Minds. Bartlomiej Chomanski - 2022 - Science and Engineering Ethics 28 (6):1-17.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I (...)
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • What’s Wrong with Designing People to Serve? Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
  • Service robots in the mirror of reflective research. Michael Decker - 2012 - Poiesis and Praxis 9 (3):181-200.
    Service robotics has increasingly become the focus of reflective research on new technologies over the last decade. The current state of technology is characterized by prototypical robot systems developed for specific application scenarios outside factories. This has enabled context-based Science and Technology Studies and technology assessments of service robotic systems. This contribution describes the status quo of this reflective research as the starting point for interdisciplinary technology assessment (TA), taking account of TA studies and, in particular, of publications from the (...)