Robot Ethics

Edited by Vincent C. Müller (Eindhoven University of Technology, University of Leeds)
1 — 50 / 132
  1. Can a Robot Lie?Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
  2. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  3. Mental Time-Travel, Semantic Flexibility, and A.I. Ethics.Marcus Arvan - forthcoming - AI and Society:1-20.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  4. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    Henry Shevlin’s paper, “How could we know when a robot was a moral patient?”, argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and at least some animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work, but argues that it is flawed in crucial respects. (...)
  5. Moral Difference Between Humans and Robots: Paternalism and Human-Relative Reason.Tsung-Hsing Ho - forthcoming - AI and Society:1-11.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency. However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct kind of reason available to (...)
  6. Alienation and Recognition - The Δ Phenomenology of the Human–Social Robot Interaction.Piercosma Bisconti & Antonio Carnevale - 2022 - Techné: Research in Philosophy and Technology 26 (1):147-171.
    A crucial philosophical problem of social robots is whether, and to what extent, they perform a kind of sociality in interacting with humans. Scholarship is divided between those who maintain that humans and social robots cannot by default have social interactions and those who argue for the possibility of an asymmetric sociality. Against this dichotomy, we argue in this paper for a holistic approach that we call the “Δ phenomenology” of human–social robot interaction (HSRI). In the first part of the paper, we will analyse the semantics of an HSRI. This (...)
  7. Tragic Choices and the Virtue of Techno-Responsibility Gaps.John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are sometimes to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  8. Engineered Wisdom for Learning Machines.Brett Karlan & Colin Allen - 2022 - Journal of Experimental and Theoretical Artificial Intelligence.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
  9. Can I Feel Your Pain? The Biological and Socio-Cognitive Factors Shaping People’s Empathy with Social Robots.Joanna Karolina Malinowska - 2022 - International Journal of Social Robotics 14 (2):341–355.
    This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people’s reactions to robots. I present arguments in favour of the position that people actually do empathise with robots. I also consider what circumstances shape human empathy with these entities. I propose that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one of (...)
  10. The Psychological Implications of Companion Robots: A Theoretical Framework and an Experimental Setup.Nicoletta Massa, Piercosma Bisconti & Daniele Nardi - 2022 - International Journal of Social Robotics (Online):1-14.
    In this paper we present a theoretical framework for understanding the underlying psychological mechanisms involved in human–Companion Robot interactions. We first take the case of Sexual Robotics, where the psychological dynamics are most evident, and thereafter extend the discussion to Companion Robotics in general. First, we discuss the differences between a sex toy and a Sexual Robot, concluding that the latter may establish a collusive and confirmatory dynamic with the user. We claim that this collusiveness leads to two main consequences, (...)
  11. Basic Issues in AI Policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Cham: Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  12. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not the most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of their decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  13. How Robots’ Unintentional Metacommunication Affects Human–Robot Interactions. A Systemic Approach.Piercosma Bisconti - 2021 - Minds and Machines 31 (4):487-504.
    In this paper, we theoretically address the relevance of unintentional and inconsistent interactional elements in human–robot interactions. We argue that elements failing, or poorly succeeding, to reproduce a humanlike interaction create significant consequences in human–robot relational patterns and may affect human–human relations. When considering social interactions as systems, the absence of a precise interactional element produces a general reshaping of the interactional pattern, eventually generating new types of interactional settings. As an instance of this dynamic, we study the absence of (...)
  14. The Mandatory Ontology of Robot Responsibility.Marc Champagne - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):448–454.
    Do we suddenly become justified in treating robots like humans by positing new notions like “artificial moral agency” and “artificial moral responsibility”? I answer no. Or, to be more precise, I argue that such notions may become philosophically acceptable only after crucial metaphysical issues have been addressed. My main claim, in sum, is that “artificial moral responsibility” betokens moral responsibility to the same degree that a “fake orgasm” betokens an orgasm.
  15. The Unfounded Bias Against Autonomous Weapons Systems.Áron Dombrovszki - 2021 - Információs Társadalom 21 (2):13–28.
    Autonomous Weapons Systems (AWS) have not gained a good reputation in the past. This attitude is odd if we look at the discussion of other, usually highly anticipated, AI technologies, like autonomous vehicles (AVs): even though these machines evoke very similar ethical issues, philosophers' attitudes towards them are constructive. In this article, I try to prove that there is an unjust bias against AWS, because almost every argument against them is effective against AVs too. I start with the definition of "AWS." Then, (...)
  16. Osaammeko rakentaa moraalisia toimijoita? [Can We Build Moral Agents?]Antti Kauppinen - 2021 - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta.
    For us to be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some extent, in accordance with them. If we are fully-fledged moral agents, we also understand why some acts are wrong, and we are thus able to flexibly adapt our conduct to different situations. I argue that there are no AI systems in sight that could genuinely care about doing the right thing or understand the demands of morality, because these capacities require experiential consciousness and holistic judgement. We therefore cannot shift responsibility for their actions onto machines. Instead, we must aim to build artificial right-doers: systems that do not (...)
  17. Who Should Bear the Risk When Self-Driving Vehicles Crash?Antti Kauppinen - 2021 - Journal of Applied Philosophy 38 (4):630-645.
    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible (...)
  18. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents.Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
  19. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  20. Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace.Duncan MacIntosh - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. Oxford University Press. pp. 9-23.
    Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false and (...)
  21. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  22. Is It Time for Robot Rights? Moral Status in Artificial Entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  23. Robot Care Ethics Between Autonomy and Vulnerability: Coupling Principles and Practices in Autonomous Systems for Care.Alberto Pirni, Maurizio Balistreri, Steven Umbrello, Marianna Capasso & Federica Merenda - 2021 - Frontiers in Robotics and AI 8 (654298):1-11.
    Technological developments involving robotics and artificial intelligence devices are being employed ever more in elderly care and the healthcare sector more generally, raising ethical issues and practical questions warranting closer consideration of what we mean by “care” and, subsequently, how to design such software consistently with the chosen definition. This paper starts by critically examining the existing approaches to the ethical design of care robots provided by Aimee van Wynsberghe, who relies on the work on the ethics of care by Joan (...)
  24. Coupling Levels of Abstraction in Understanding Meaningful Human Control of Autonomous Weapons: A Two-Tiered Approach.Steven Umbrello - 2021 - Ethics and Information Technology 23 (3):455-464.
    The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focus on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. (...)
  25. Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots.Steven Umbrello, Marianna Capasso, Maurizio Balistreri, Alberto Pirni & Federica Merenda - 2021 - Minds and Machines 31 (3):395-419.
    Healthcare is becoming increasingly automated with the development and deployment of care robots. There are many benefits to care robots but they also pose many challenging ethical issues. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. Using the value sensitive design approach to technology design, this paper extends its application to care robots by integrating the values of care, values that (...)
  26. Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status.Steven Umbrello & Nathan Gabriel Wood - 2021 - Information 12 (5):216.
    Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving evermore attention, both in public discourse as well as by scholars and policymakers. Much of this interest is connected with emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable than flesh-and-blood soldiers, (...)
  27. The Emperor is Naked: Moral Diplomacies and the Ethics of AI.Constantin Vica, Cristina Voinea & Radu Uszkai - 2021 - Információs Társadalom 21 (2):83-96.
    With AI permeating our lives, there is widespread concern regarding the proper framework needed to morally assess and regulate it. This has given rise to many attempts to devise ethical guidelines that provide guidance for both AI development and deployment. Our main concern is that, instead of a genuine ethical interest in AI, we are witnessing moral diplomacies resulting in moral bureaucracies battling for moral supremacy and political domination. After providing a short overview of what we term ‘ethics washing’ in (...)
  28. Sexual Robots: The Social-Relational Approach and the Concept of Subjective Reference.Piercosma Bisconti & Susanna Piermattei - 2020 - Lecture Notes in Computer Science.
    In this paper we propose the notion of “subjective reference” as a conceptual tool that explains how and why human-robot sexual interactions could reframe users’ approach to human-human sexual interactions. First, we introduce the current debate about Sexual Robotics, situated in the wider discussion about Social Robots, and state the urgency of a regulative framework. We underline the importance of a social-relational approach, which is mostly concerned with the impact of Social Robots on human social structures. Then, we point out the absence of a precise (...)
  29. Welcoming Robots Into the Moral Circle: A Defence of Ethical Behaviourism.John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
  30. Robot Betrayal: A Guide to the Ethics of Robotic Deception.John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in (...)
  31. Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability.Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the article tackles the (...)
  32. 反思機器人的道德擬人主義 [Rethinking the Moral Anthropomorphism of Robots].Tsung-Hsing Ho - 2020 - EurAmerica 50 (2):179-205.
    If robots are to work autonomously without human supervision, as science fiction imagines, we must make sure that they will not act in morally wrong ways. According to a behaviourist conception of moral agency, if a robot’s outward behaviour is morally on a par with a human’s, the robot can be regarded as a moral agent. From this it is natural to infer moral anthropomorphism about robots: whatever moral rules apply to humans also apply to robots. I reject moral anthropomorphism. Drawing on Strawson’s insights into interpersonal relationships and reactive attitudes, and taking paternalistic action as my example, I argue that because robots lack personhood and cannot participate in interpersonal relationships, they should be subject to stricter constraints than humans with regard to paternalistic action.
  33. Value-Oriented and Ethical Technology Engineering in Industry 5.0: A Human-Centric Perspective for the Design of the Factory of the Future.Francesco Longo, Antonio Padovano & Steven Umbrello - 2020 - Applied Sciences 10 (12):4182.
    Manufacturing and industry practices are undergoing an unprecedented revolution as a consequence of the convergence of emerging technologies such as artificial intelligence, robotics, cloud computing, virtual and augmented reality, among others. This fourth industrial revolution is similarly changing the practices and capabilities of operators in their industrial environments. This paper introduces and explores the notion of the Operator 4.0 as well as how this novel way of conceptualizing the human operator necessarily implicates human values in the technologies that constitute it. (...)
  34. Human Rights of Users of Humanlike Care Automata.Lantz Fleming Miller - 2020 - Human Rights Review 21 (2):181-205.
    Care is more than dispensing pills or cleaning beds. It is about responding to the entire patient. What is called “bedside manner” in medical personnel is a quality of treating the patient not as a mechanism but as a being—much like the caregiver—with desires, ideas, dreams, aspirations, and the gamut of mental and emotional character. As automata, which answer an increasing functional need in care, are designed to enact care, the pressure is on them to become more humanlike in order to carry out the (...)
  35. Responsible Research for the Construction of Maximally Humanlike Automata: The Paradox of Unattainable Informed Consent.Lantz Fleming Miller - 2020 - Ethics and Information Technology 22 (4):297-305.
    Since the Nuremberg Code and the first Declaration of Helsinki, there has been increasing global adoption of, and adherence to, procedures for ensuring that human subjects in research are as well informed as possible of a study’s reasons and risks and voluntarily consent to serving as subjects. To do otherwise is essentially viewed as a violation of the human research subject’s legal and moral rights. However, with the recent philosophical concerns about responsible robotics, the limits and ambiguities of research-subject ethics codes become (...)
  36. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  37. Technological Displacement and the Duty to Increase Living Standards: From Left to Right.Howard Nye - 2020 - International Review of Information Ethics 28:1-16.
    Many economists have argued convincingly that automated systems employing present-day artificial intelligence have already caused massive technological displacement, which has led to stagnant real wages, fewer middle-income jobs, and increased economic inequality in developed countries like Canada and the United States. To address this problem, various individuals have proposed measures to increase workers’ living standards, including the adoption of a universal basic income, increased public investment in education, increased minimum wages, increased worker control of firms, and investment in a (...)
  38. Jewish Law, Techno-Ethics, and Autonomous Weapon Systems: Ethical-Halakhic Perspectives.Nadav S. Berman - 2020 - Jewish Law Association Studies 29:91-124.
    Techno-ethics is the area in the philosophy of technology that deals with emerging robotic and digital AI technologies. In the last decade, a new techno-ethical challenge has emerged: Autonomous Weapon Systems (AWS), defensive and offensive (the article deals only with the latter). Such AI-operated lethal machines of various forms (aerial, marine, continental) raise substantial ethical concerns. Interestingly, the topic of AWS has received almost no treatment in Jewish law and its scholarship. This article thus proposes an introductory ethical-halakhic perspective on AWS, (...)
  39. ETHICA EX MACHINA. Exploring Artificial Moral Agency or the Possibility of Computable Ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life-or-death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to present, from a normative-cognitive perspective, the (...)
  40. Designing AI with Rights, Consciousness, Self-Respect, and Freedom.Eric Schwitzgebel & Mara Garza - 2020 - In Ethics of Artificial Intelligence. New York, NY, USA: pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, (...)
  41. Autonomous Weapons Systems and the Moral Equality of Combatants.Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 3 (6).
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. (...)
  42. Values, Imagination, and Praxis: Towards a Value Sensitive Future with Technology. [REVIEW]Steven Umbrello - 2020 - Science and Engineering Ethics 26 (1):495-499.
    A new book by Batya Friedman and David G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination, is reviewed. Value Sensitive Design is an inquiry into the ethical and design issues that emerge during the engineering of new technologies. This book is intended to build on over two decades of value sensitive design research, though with a greater emphasis on the development of the theoretical underpinnings of the approach as well as on the initial steps that designers can employ (...)
  43. Maurizio Balistreri, Sex Robot. L’Amore Al Tempo Delle Macchine. [REVIEW]Steven Umbrello - 2020 - Filosofia 2020 (65):191-193.
    A new book by Maurizio Balistreri, "Sex robot. L’amore al tempo delle macchine", is reviewed. Sex robots not only exacerbate social, ethical and cultural issues that already exist, but also come with emergent and novel ones. This book is intended to build on the recent research on both robotics and the growing scholarship on sex robots more generally, though with greater attention to the development of the philosophical issues of how to deal with these new artefacts and steps for living (...)
  44. Meaningful Human Control Over Smart Home Systems: A Value Sensitive Design Approach.Steven Umbrello - 2020 - Humana.Mente Journal of Philosophical Studies 13 (37):40-65.
    The last decade has witnessed the mass distribution and adoption of smart home systems and devices powered by artificial intelligence, ranging from household appliances like fridges and toasters to more background systems such as air and water quality controllers. The pervasiveness of these sociotechnical systems makes it necessary to analyze their ethical implications during the design phase, not only to ensure sociotechnical resilience but also to design them with human values in mind and thus preserve meaningful human control over (...)
  45. The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons.Steven Umbrello, Phil Torres & Angelo F. De Bellis - 2020 - AI and Society 35 (1):273-282.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
  46. Modeling Artificial Agents’ Actions in Context – a Deontic Cognitive Event Ontology.Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies with AI research on artificial agents, the two remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents, inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic ontology containing facts about (...)
  47. Empathy and Instrumentalization: Late Ancient Cultural Critique and the Challenge of Apparently Personal Robots.Jordan Joseph Wales - 2020 - In Marco Nørskov, Johanna Seibt & Oliver Santiago Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam: IOS Press. pp. 114-124.
    According to a tradition that we hold variously today, the relational person lives most personally in affective and cognitive empathy, whereby we enter subjective communion with another person. Near future social AIs, including social robots, will give us this experience without possessing any subjectivity of their own. They will also be consumer products, designed to be subservient instruments of their users’ satisfaction. This would seem inevitable. Yet we cannot live as personal when caught between instrumentalizing apparent persons (slaveholding) or numbly (...)
  48. Punishing Robots – Way Out of Sparrow’s Responsibility Attribution Problem.Maciek Zając - 2020 - Journal of Military Ethics 19 (4):285-291.
    The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon...
  49. When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  50. How Should Autonomous Vehicles Redistribute the Risks of the Road?Brian Berkey - 2019 - Wharton Public Policy Initiative Issue Brief 7 (9):1-6.