References
  • Thinking unwise: a relational u-turn. Nicholas Barrow - 2023 - In Social Robots in Social Institutions: Proceedings of RoboPhilosophy 2022.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic, case-by-case approach to ascribing artificial moral status fails on two counts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly, I, in light of interpreting Gunkel's Relationism as (...)
  • A metaphysical account of agency for technology governance. Sadjad Soltanzadeh - forthcoming - AI and Society:1-12.
    The way in which agency is conceptualised has implications for understanding human–machine interactions and the governance of technology, especially artificial intelligence (AI) systems. Traditionally, agency is conceptualised as a capacity, defined by intrinsic properties, such as cognitive or volitional facilities. I argue that the capacity-based account of agency is inadequate to explain the dynamics of human–machine interactions and guide technology governance. Instead, I propose to conceptualise agency as impact. Agents as impactful entities can be identified at different levels: from the (...)
  • The Ethics of Terminology: Can We Use Human Terms to Describe AI? Ophelia Deroy - 2023 - Topoi 42 (3):881-889.
    Despite facing significant criticism for assigning human-like characteristics to artificial intelligence, phrases like “trustworthy AI” are still commonly used in official documents and ethical guidelines. It is essential to consider why institutions continue to use these phrases, even though they are controversial. This article critically evaluates various reasons for using these terms, including ontological, legal, communicative, and psychological arguments. All these justifications share the common feature of trying to justify the official use of terms like “trustworthy AI” by appealing to (...)
  • Moral Uncertainty and Our Relationships with Unknown Minds. John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)
  • A qualified defense of top-down approaches in machine ethics. Tyler Cook - forthcoming - AI and Society:1-15.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • The Curious Case of Uncurious Creation. Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue epistemic (...)
  • The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach which aims to reconcile (...)
  • Collective Agents as Moral Actors. Säde Hormio - forthcoming - In Säde Hormio & Bill Wringe (eds.), Collective Responsibility: Perspectives on Political Philosophy from Social Ontology. Springer.
    How should we make sense of praise and blame and other such reactions towards collective agents like governments, universities, or corporations? Collective agents can be appropriate targets for our moral feelings and judgements because they can maintain and express moral positions of their own. Moral agency requires being capable of recognising moral considerations and reasons. It also necessitates the ability to react reflexively to moral matters, i.e. to take into account new moral concerns when they arise. While members of a (...)
  • Ectogestative Technology and the Beginning of Life. Lily Frank, Julia Hermann, Ilona Kavege & Anna Puzio - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 113–140.
    How could ectogestative technology disrupt gender roles, parenting practices, and concepts such as ‘birth’, ‘body’, or ‘parent’? In this chapter, we situate this emerging technology in the context of the history of reproductive technologies and analyse the potential social and conceptual disruptions to which it could contribute. An ectogestative device, better known as ‘artificial womb’, enables the extra-uterine gestation of a human being, or mammal more generally. It is currently developed with the main goal of improving the survival chances of (...)
  • Social Robots and Society. Sven Nyholm, Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Lohr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well (...)
  • Anthropological Crisis or Crisis in Moral Status: a Philosophy of Technology Approach to the Moral Consideration of Artificial Intelligence. Joan Llorca Albareda - 2024 - Philosophy and Technology 37 (1):1-26.
    The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings begins to show signs of a number of properties that are central to the understanding of moral agency. It makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter within the circle of moral consideration. This raises the (...)
  • Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  • Domesticating Artificial Intelligence. Luise Müller - 2022 - Moral Philosophy and Politics 9 (2):219-237.
    For their deployment in human societies to be safe, AI agents need to be aligned with value-laden cooperative human life. One way of solving this “problem of value alignment” is to build moral machines. I argue that the goal of building moral machines aims at the wrong kind of ideal, and that instead, we need an approach to value alignment that takes seriously the categorically different cognitive and moral capabilities between human and AI agents, a condition I call deep agential (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot interactions per se, and we need (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Why We Should Understand Conversational AI as a Tool. Marlies N. van Lingen, Noor A. A. Giesbertz, J. Peter van Tintelen & Karin R. Jongsma - 2023 - American Journal of Bioethics 23 (5):22-24.
    The introduction of chatGPT illustrates the rapid developments within Conversational Artificial Intelligence (CAI) technologies (Gordijn and Have 2023). Ethical reflection and analysis of CAI are c...
  • Could the destruction of a beloved robot be considered a hate crime? An exploration of the legal and social significance of robot love. Paula Sweeney - forthcoming - AI and Society:1-7.
    In the future, it is likely that we will form strong bonds of attachment and even develop love for social robots. Some of these loving relations will be, from the human’s perspective, as significant as a loving relationship that they might have had with another human. This means that, from the perspective of the loving human, the mindless destruction of their robot partner could be as devastating as the murder of another’s human partner. Yet, the loving partner of a robot (...)
  • Applying AI for social good: Aligning academic journal ratings with the United Nations Sustainable Development Goals (SDGs). David Steingard, Marcello Balduccini & Akanksha Sinha - 2023 - AI and Society 38 (2):613-629.
    This paper offers three contributions to the burgeoning movements of AI for Social Good (AI4SG) and AI and the United Nations Sustainable Development Goals (SDGs). First, we introduce the SDG-Intense Evaluation framework (SDGIE) that aims to situate variegated automated/AI models in a larger ecosystem of computational approaches to advance the SDGs. To foster knowledge collaboration for solving complex social and environmental problems encompassed by the SDGs, the SDGIE framework details a benchmark structure of data-algorithm-output to effectively standardize AI approaches to (...)
  • A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Sentience, Vulcans, and Zombies: The Value of Phenomenal Consciousness. Joshua Shepherd - forthcoming - AI and Society:1-11.
    Many think that a specific aspect of phenomenal consciousness – valenced or affective experience – is essential to consciousness’s moral significance (valence sentientism). They hold that valenced experience is necessary for well-being, or moral status, or psychological intrinsic value (or all three). Some think that phenomenal consciousness generally is necessary for non-derivative moral significance (broad sentientism). Few think that consciousness is unnecessary for moral significance (non-necessitarianism). In this paper I consider the prospects for these views. I first consider the prospects (...)
  • Joint Interaction and Mutual Understanding in Social Robotics. Sebastian Schleidgen & Orsolya Friedrich - 2022 - Science and Engineering Ethics 28 (6):1-20.
    Social robotics aims at designing robots capable of joint interaction with humans. On a conceptual level, sufficient mutual understanding is usually said to be a necessary condition for joint interaction. Against this background, the following questions remain open: in which sense is it legitimate to speak of human–robot joint interaction? What exactly does it mean to speak of humans and robots sufficiently understanding each other to account for human–robot joint interaction? Is such joint interaction effectively possible by reference, e.g., to (...)
  • Attention, moral skill, and algorithmic recommendation. Nick Schuster & Seth Lazar - forthcoming - Philosophical Studies:1-26.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through the information (...)
  • Narrative autonomy and artificial storytelling. Silvia Pierosara - forthcoming - AI and Society:1-10.
    This article tries to shed light on the difference between human autonomy and AI-driven machine autonomy. The breadth of the studies concerning this topic is constantly increasing, and for this reason, this discussion is very narrow and limited in its extent. Indeed, its hypothesis is that it is possible to distinguish two kinds of autonomy by analysing the way humans and robots narrate stories and the types of stories that, respectively, result from their capability of narrating stories on their own. (...)
  • Legal personhood for the integration of AI systems in the social context: a study hypothesis. Claudio Novelli - forthcoming - AI and Society:1-13.
    In this paper, I shall set out the pros and cons of assigning legal personhood to artificial intelligence systems under civil law. More specifically, I will provide arguments supporting a functionalist justification for conferring personhood on AIs, and I will try to identify what content this legal status might have from a regulatory perspective. Being a person in law implies the entitlement to one or more legal positions. I will mainly focus on liability as it is one of the main (...)
  • Should criminal law protect love relation with robots? Kamil Mamak - forthcoming - AI and Society:1-10.
    Whether or not we call a love-like relationship with robots true love, some people may feel and claim that, for them, it is a sufficient substitute for a love relationship. The love relationship between humans has a special place in our social life. On the grounds of both morality and law, our significant other can expect special treatment. It is understandable that, precisely because of this kind of relationship, we save our significant other instead of others or will not testify against (...)
  • Military robots should not look like humans. Kamil Mamak & Kaja Kowalczewska - 2023 - Ethics and Information Technology 25 (3):1-10.
    Using robots in military contexts is problematic at many levels. There are social, legal, and ethical issues that should be discussed before their wider deployment. In this paper, we focus on an additional problem: their human likeness. We claim that military robots should not look like humans. That design choice may bring additional risks that endanger human lives and thereby contradicts the very justification for deploying robots at war, which is decreasing human deaths and injuries. We discuss (...)
  • Humans, Neanderthals, robots and rights. Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will (...)
  • Responsible AI Through Conceptual Engineering. Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • The Ethics of AI Ethics. A Constructive Critique. Jan-Christoph Heilinger - 2022 - Philosophy and Technology 35 (3):1-20.
    The paper presents an ethical analysis and constructive critique of the current practice of AI ethics. It identifies conceptual, substantive, and procedural challenges, and it outlines strategies to address them. The strategies include countering the hype and understanding AI as ubiquitous infrastructure, including neglected issues of ethics and justice, such as structural background injustices, into the scope of AI ethics, and making the procedures and fora of AI ethics more inclusive and better informed with regard to philosophical ethics. These measures (...)
  • Computing and moral responsibility. Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
  • Computing and moral responsibility. Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
  • Distributed responsibility in human–machine interactions. Anna Strasser - 2021 - AI and Ethics.
    Artificial agents have become increasingly prevalent in human social life. In light of the diversity of new human–machine interactions, we face renewed questions about the distribution of moral responsibility. Besides positions denying the mere possibility of attributing moral responsibility to artificial systems, recent approaches discuss the circumstances under which artificial agents may qualify as moral agents. This paper revisits the discussion of how responsibility might be distributed between artificial agents and human interaction partners (including producers of artificial agents) and raises (...)