Contents
186 entries found (showing 1–50)
  1. Can a robot lie?Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
    5 citations
  2. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  3. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
    1 citation
  4. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots.Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we should afford (...)
  5. Why AI May Undermine Phronesis and What to Do about It.Cheng-Hung Tsai & Hsiu-lin Ku - forthcoming - AI and Ethics.
    Phronesis, or practical wisdom, is a capacity the possession of which enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. A critic might ask: why should we worry that AI undermines phronesis? Why can’t we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge such an objection but do not consider it seriously. In this (...)
  6. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
    1 citation
  7. How AI Systems Can Be Blameworthy.Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  8. Impossibility of Artificial Inventors.Matt Blaszczyk - 2024 - Hastings Sci. And Tech. L.J 16:73.
    Recently, the United Kingdom Supreme Court decided that only natural persons can be considered inventors. A year before, the United States Court of Appeals for the Federal Circuit issued a similar decision. In fact, so have many courts all over the world. This Article analyses these decisions, argues that the courts got it right, and finds that artificial inventorship is at odds with patent law doctrine, theory, and philosophy. The Article challenges the intellectual property (IP) post-humanists, exposing the analytical (...)
  9. The Ethics of Automating Therapy.Jake Burley, James J. Hughes, Alec Stubbs & Nir Eisikovits - 2024 - Ieet White Papers.
    The mental health crisis and loneliness epidemic have sparked a growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. (...)
  10. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
    2 citations
  11. Artificial Intelligence and Universal Values.Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to maximize (...)
  12. The Many Meanings of Vulnerability in the AI Act and the One Missing.Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  13. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  14. Engineered Wisdom for Learning Machines.Brett Karlan & Colin Allen - 2024 - Journal of Experimental and Theoretical Artificial Intelligence 36 (2):257-272.
    We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as (...)
    2 citations
  15. The Trolley Problem and Isaac Asimov’s First Law of Robotics.Erik Persson & Maria Hedlund - 2024 - Journal of Science Fiction and Philosophy 7.
    How to make robots safe for humans is intensely debated, within academia as well as in industry, the media, and the political arena. Hardly any discussion of the subject fails to mention Isaac Asimov’s Three Laws of Robotics. We find it curious that a set of fictional laws can have such a strong impact on discussions about a real-world problem, and we think this needs to be looked into. Probably the most common phrase in connection with robotic and AI ethics, (...)
  16. A NAO Robot Performing Religious Practices.Anna Puzio - 2024 - ET-Studies 15 (1):129-140.
    In Sect. 2, I will introduce what religious robots are and present examples of such robots. Then, in Sect. 3, I will discuss my project with a NAO robot at the Katholikentag. In Sect. 4, I will discuss anthropological and ethical questions related to religious robots. Thus, I will outline the direction in which research on religious robots can go, where the challenges lie, and highlight two key advantages. Finally, in Sect. 5, I conclude with an outlook for future research (...)
  17. Towards an Eco-Relational Approach: Relational Approaches Must Be Applied in Ethics and Law.Anna Puzio - 2024 - Philosophy and Technology 37 (67):1-5.
    Relational approaches are gaining more and more importance in philosophy of technology. This brings up the critical question of how they can be implemented in applied ethics, law, and practice. In “Extremely Relational Robots: Implications for Law and Ethics”, Nancy S. Jecker (2024) comments on my article “Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics” (Puzio, 2024), in which I present a deeply relational, “eco-relational approach”. In this reply, I address two of Jecker’s criticisms: in Section 3, I (...)
    1 citation
  18. IT & C, Volumul 3, Numărul 3, Septembrie 2024.Nicolae Sfetcu - 2024 - It and C 3 (3).
    The IT & C journal is a quarterly publication covering information technology, communications, and related fields of study and practice. Contents: EDITORIAL: Tools Used in AI Development – The Turing Test. INFORMATION TECHNOLOGY: Trends in the Evolution of Artificial Intelligence – Intelligent Agents. TELECOMMUNICATIONS: Security in 5G Telecommunications Networks with (...)
  19. Reframing Deception for Human-Centered AI.Steven Umbrello & Simone Natale - 2024 - International Journal of Social Robotics 16 (11-12):2223–2241.
    The philosophical, legal, and HCI literature concerning artificial intelligence (AI) has explored the ethical implications of these systems and the values they will affect. One aspect that has been only partially explored, however, is the role of deception. Due to the negative connotation of this term, research in AI and Human–Computer Interaction (HCI) has mainly considered deception to describe exceptional situations in which the technology either does not work or is used for malicious purposes. Recent theoretical and historical work, however, has (...)
  20. A Case for 'Killer Robots': Why in the Long Run Martial AI May Be Good for Peace.Ognjen Arandjelović - 2023 - Journal of Ethics, Entrepreneurship and Technology 3 (1).
    Purpose: The remarkable increase of sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, the potential of so-called 'killer robots' ceasing to be a subject of fiction. -/- Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for the ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the existing ethical (...)
  21. Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
    9 citations
  22. Do You Follow?: A Fully Automated System for Adaptive Robot Presenters.Agnes Axelsson & Gabriel Skantze - 2023 - Hri '23: Proceedings of the 2023 Acm/Ieee International Conference on Human-Robot Interaction 23:102-111.
    An interesting application for social robots is to act as a presenter, for example as a museum guide. In this paper, we present a fully automated system architecture for building adaptive presentations for embodied agents. The presentation is generated from a knowledge graph, which is also used to track the grounding state of information, based on multimodal feedback from the user. We introduce a novel way to use large-scale language models (GPT-3 in our case) to lexicalise arbitrary knowledge graph triples, (...)
  23. Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication.Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications 2023.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)
  24. Robot Ethics. Mark Coeckelbergh (2022). Cambridge, MIT Press. vii + 191 pp, $16.95 (pb). [REVIEW]Nicholas Barrow - 2023 - Journal of Applied Philosophy (5):970-972.
  25. Should the State Prohibit the Production of Artificial Persons?Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  26. Reasons to Punish Autonomous Robots.Zac Cogley - 2023 - The Gradient 14.
    I here consider the reasonableness of punishing future autonomous military robots. I argue that it is an engineering desideratum that these devices be responsive to moral considerations as well as human criticism and blame. Additionally, I argue that someday it will be possible to build such machines. I use these claims to respond to the no subject of punishment objection to deploying autonomous military robots, the worry being that an “accountability gap” could result if the robot committed a war crime. (...)
    1 citation
  27. Les revendications de droits pour les robots : constructions et conflits autour d’une éthique de la robotique.Charles Corval - 2023 - Implications Philosophiques.
    This work examines contemporary claims of rights for robots. It presents the main forms of argument that have been developed in favor of ethical consideration or positive rights for these machines. It relates these arguments to an action-research project in order to offer a critical perspective on the idea of robot rights. Finally, it shows the complex relationship between narratives of modernity and the claim of rights for robots. (...)
  28. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of.Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the wider public (...)
  29. Robots, Rebukes, and Relationships: Confucian Ethics and the Study of Human-Robot Interactions.Alexis Elder - 2023 - Res Philosophica 100 (1):43-62.
    The status and functioning of shame is contested in moral psychology. In much of anglophone philosophy and psychology, it is presumed to be largely destructive, while in Confucian philosophy and many East Asian communities, it is positively associated with moral development. Recent work in human-robot interaction offers a unique opportunity to investigate how shame functions while controlling for confounding variables of interpersonal interaction. One research program suggests a Confucian strategy for using robots to rebuke participants, but results from experiments with (...)
    1 citation
  30. What Confucian Ethics Can Teach Us About Designing Caregiving Robots for Geriatric Patients.Alexis Elder - 2023 - Digital Society 2 (1).
    Caregiving robots are often lauded for their potential to assist with geriatric care. While seniors can be wise and mature, possessing valuable life experience, they can also present a variety of ethical challenges, from prevalence of racism and sexism, to troubled relationships, histories of abusive behavior, and aggression, mood swings and impulsive behavior associated with cognitive decline. I draw on Confucian ethics, especially the concept of filial piety, to address these issues. Confucian scholars have developed a rich set of theoretical (...)
  31. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
    2 citations
  32. A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  33. Connected and Automated Vehicles: Integrating Engineering and Ethics.Fabio Fossa & Federico Cheli (eds.) - 2023 - Cham: Springer.
    This book reports on theoretical and practical analyses of the ethical challenges connected to driving automation. It also aims at discussing issues that have arisen from the European Commission 2020 report “Ethics of Connected and Automated Vehicles. Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility”. Gathering contributions by philosophers, social scientists, mechanical engineers, and UI designers, the book discusses key ethical concerns relating to responsibility and personal autonomy, privacy, safety, and cybersecurity, as well as explainability and human-machine interaction. On (...)
  34. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
    12 citations
  35. Social Robots and Society.Sven Nyholm, Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Löhr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well (...)
  36. Challenges for ‘Community’ in Science and Values: Cases from Robotics Research.Charles H. Pence & Daniel J. Hicks - 2023 - Humana.Mente Journal of Philosophical Studies 16 (44):1-32.
    Philosophers of science often make reference — whether tacitly or explicitly — to the notion of a scientific community. Sometimes, such references are useful to make our object of analysis tractable in the philosophy of science. For others, tracking or understanding particular features of the development of science proves to be tied to notions of a scientific community either as a target of theoretical or social intervention. We argue that the structure of contemporary scientific research poses two unappreciated, or at (...)
    1 citation
  37. Authenticity and co-design: On responsibly creating relational robots for children.Milo Phillips-Brown, Marion Boulicault, Jacqueline Kory-Westland, Stephanie Nguyen & Cynthia Breazeal - 2023 - In Mizuko Ito, Remy Cross, Karthik Dinakar & Candice Odgers (eds.), Algorithmic Rights and Protections for Children. MIT Press. pp. 85-121.
    Meet Tega. Blue, fluffy, and AI-enabled, Tega is a relational robot: a robot designed to form relationships with humans. Created to aid in early childhood education, Tega talks with children, plays educational games with them, solves puzzles, and helps in creative activities like making up stories and drawing. Children are drawn to Tega, describing him as a friend, and attributing thoughts and feelings to him ("he's kind," "if you just left him here and nobody came to play with him, he (...)
  38. When the Digital Continues After Death Ethical Perspectives on Death Tech and the Digital Afterlife.Anna Puzio - 2023 - Communicatio Socialis 56 (3):427-436.
    Nothing seems as certain as death. However, what if life continues digitally after death? Companies and initiatives such as Amazon, Storyfile, Here After AI, Forever Identity and LifeNaut are dedicated to precisely this objective: using avatars, records, and other digital content of the deceased, they strive to enable a digital continuation of life. The deceased live on digitally and, at times, can even appear very much alive, perhaps too alive? This article explores the ethical implications of these technologies, commonly known (...)
  39. Designing AI with Rights, Consciousness, Self-Respect, and Freedom.Eric Schwitzgebel & Mara Garza - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, (...)
    4 citations
  40. Human-Centered AI: The Aristotelian Approach.Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  41. The Icon and the Idol: A Christian Perspective on Sociable Robots.Jordan Joseph Wales - 2023 - In Jens Zimmermann (ed.), Human Flourishing in a Technological World: A Theological Perspective. Oxford University Press. pp. 94-115.
    Consulting early and medieval Christian thinkers, I theologically analyze the question of how we are to construe and live well with the sociable robot under the ancient theological concept of “glory”—the manifestation of God’s nature and life outside of himself. First, the oft-noted Western wariness toward robots may in part be rooted in protecting a certain idea of the “person” as a relational subject capable of self-gift. Historically, this understanding of the person derived from Christian belief in God the Trinity, (...)
  42. A Kantian Course Correction for Machine Ethics.Ava Thomas Wright - 2023 - In Gregory Robson & Jonathan Y. Tsou (eds.), Technology Ethics: A Philosophical Introduction and Readings. New York, NY, USA: Routledge. pp. 141-151.
    The central challenge of “machine ethics” is to build autonomous machine agents that act morally rightly. But how can we build autonomous machine agents that act morally rightly, given reasonable disputes over what is right and wrong in particular cases? In this chapter, I argue that Immanuel Kant’s political philosophy can provide an important part of the answer.
  43. AWS compliance with the ethical principle of proportionality: three possible solutions.Maciek Zając - 2023 - Ethics and Information Technology 25 (1):1-13.
    The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to the ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of “excessiveness”. (...)
  44. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  45. Alienation and Recognition - The Δ Phenomenology of the Human–Social Robot Interaction.Piercosma Bisconti & Antonio Carnevale - 2022 - Techné: Research in Philosophy and Technology 26 (1):147-171.
    A crucial philosophical problem of social robots is the extent to which they perform a kind of sociality when interacting with humans. Scholarship diverges between those who maintain that humans and social robots cannot by default have social interactions and those who argue for the possibility of an asymmetric sociality. Against this dichotomy, we argue in this paper for a holistic approach called the “Δ phenomenology” of HSRI. In the first part of the paper, we will analyse the semantics of an HSRI. This (...)
  46. Algoritmi e processo decisionale. Alle origini della riflessione etico-pratica per le IA.Cristiano Calì - 2022 - Sandf_ Scienzaefilosofia.It.
    This contribution aims to investigate not so much the ethical implications of utilizing intelligent machines in specific contexts (human resources, self-driving cars, robotic hospital assistants, et cetera), but rather the premises of their widespread use. In particular, it aims to analyze the complex concept of decision making - the cornerstone of any ethical argument - from a dual point of view: decision making assigned to machines and decision making enacted by machines. Analyzing the role of algorithms in decision making, we suggest (...)
  47. Tragic Choices and the Virtue of Techno-Responsibility Gaps.John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  48. Moral difference between humans and robots: paternalism and human-relative reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  49. Bridging East-West Differences in Ethics Guidance for AI and Robots.Nancy S. Jecker & Eisuke Nakazawa - 2022 - AI 3 (3):764-777.
    Societies of the East are often contrasted with those of the West in their stances toward technology. This paper explores these perceived differences in the context of international ethics guidance for artificial intelligence (AI) and robotics. Japan serves as an example of the East, while Europe and North America serve as examples of the West. The paper’s principal aim is to demonstrate that Western values predominate in international ethics guidance and that Japanese values serve as a much-needed corrective. We recommend (...)
  50. Problems of Using Autonomous Military AI Against the Background of Russia's Military Aggression Against Ukraine.Oleksii Kostenko, Tyler Jaynes, Dmytro Zhuravlov, Oleksii Dniprov & Yana Usenko - 2022 - Baltic Journal of Legal and Social Sciences 2022 (4):131-145.
    The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially alongside concern for its controllability. The lack of public, state, and international control over AI technologies creates large-scale risks of using such software and hardware that (un)intentionally harm humanity. The events of recent months and years, specifically the Russian Federation’s war against its democratic neighbour Ukraine and other notable international conflicts, support the thesis that the uncontrolled use of AI, especially (...)