  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta, Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology 34 (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • The Testimony Gap: Machines and Reasons. Robert Sparrow & Gene Flenady - 2025 - Minds and Machines 35 (1):1-16.
    Most people who have considered the matter have concluded that machines cannot be moral agents. Responsibility for acting on the outputs of machines must always rest with a human being. A key problem for the ethical use of AI, then, is to ensure that it does not block the attribution of responsibility to humans or lead to individuals being unfairly held responsible for things over which they had no control. This is the “responsibility gap”. In this paper, we argue that (...)
  • Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)
  • Responsible AI Through Conceptual Engineering. Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia 52 (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • Playing the Blame Game with Robots. Markus Kneer & Michael T. Stuart - 2021 - In Markus Kneer & Michael T. Stuart, Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21 Companion). New York, NY, USA: ACM.
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)
  • Narrative responsibility and artificial intelligence. Mark Coeckelbergh - 2023 - AI and Society 38 (6):2437-2450.
    Most accounts of responsibility focus on one type of responsibility, moral responsibility, or address one particular aspect of moral responsibility such as agency. This article outlines a broader framework to think about responsibility that includes causal responsibility, relational responsibility, and what I call “narrative responsibility” as a form of “hermeneutic responsibility”, connects these notions of responsibility with different kinds of knowledge, disciplines, and perspectives on human being, and shows how this framework is helpful for mapping and analysing how artificial intelligence (...)
  • Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
  • I, Volkswagen. Stephanie Collins - 2022 - Philosophical Quarterly 72 (2):283-304.
    Philosophers increasingly argue that collective agents can be blameworthy for wrongdoing. Advocates tend to endorse functionalism, on which collectives are analogous to complicated robots. This is puzzling: we don’t hold robots blameworthy. I argue we don’t hold robots blameworthy because blameworthiness presupposes the capacity for a mental state I call ‘moral self-awareness’. This raises a new problem for collective blameworthiness: collectives seem to lack the capacity for moral self-awareness. I solve the problem by giving an account of how collectives have (...)
  • Vicarious liability: a solution to a problem of AI responsibility? Matteo Pascucci & Daniela Glavaničová - 2022 - Ethics and Information Technology 24 (3):1-11.
    Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming there is a unique responsibility gap, several different responsibility gaps, or no gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility (...)
  • We Have No Satisfactory Social Epistemology of AI-Based Science. Inkeri Koskinen - 2024 - Social Epistemology 38 (4):458-475.
    In the social epistemology of scientific knowledge, it is largely accepted that relationships of trust, not just reliance, are necessary in contemporary collaborative science characterised by relationships of opaque epistemic dependence. Such relationships of trust are taken to be possible only between agents who can be held accountable for their actions. But today, knowledge production in many fields makes use of AI applications that are epistemically opaque in an essential manner. This creates a problem for the social epistemology of scientific (...)
  • Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.
  • Humans, Neanderthals, robots and rights. Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will (...)
  • Can Autonomous Agents Without Phenomenal Consciousness Be Morally Responsible? László Bernáth - 2021 - Philosophy and Technology 34 (4):1363-1382.
    It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to (...)
  • Computing and moral responsibility. Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
  • ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  • What’s Wrong with Designing People to Serve? Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
  • AI-Extended Moral Agency? Pii Telakivi, Tomi Kokkonen, Raul Hakli & Pekka Mäkelä - forthcoming - Social Epistemology.
    In this paper, we ask how ‘cognitive extenders’, based on AI technology, affect their users’ status as moral agents and the moral evaluation of their actions. We study how ‘AI-extenders’ can either enhance or diminish their users’ moral agency. On the one hand, they can broaden the scope of agential features and on the other hand, they can undermine the agent’s autonomy and lead to decreased responsibility. Our focus is on moral agency and responsibility of the AI-extended human being as (...)
  • The Moral Agent: A Critical Rationalist Perspective. Alireza Mansouri - 2024 - Philosophia 52 (3).
    Despite the moral underpinnings of Karl Popper’s philosophy, he has not presented a well-established moral theory for critical rationalism (CR). This paper addresses the ontological status of _moral agents_ as part of a research program for developing a moral theory for CR. It argues that moral agents are _selves_ who have achieved the cognitive capacity of _personhood_ through an evolutionary scenario and interaction with the environment. This proposal draws on Popper’s theory of the self and his theory of three worlds, (...)
  • The Root of Algocratic Illegitimacy. Mikhail Volkov - 2025 - Philosophy and Technology 38 (2):1-15.
    Would a political system in which governance is overseen by an algorithmic system be legitimate? The intuitive answer seems to be no. This paper considers the philosophical effort to justify this intuition by arguing that algocracy, rule by algorithms, is illegitimate. Taking as the paradigmatic example Danaher’s anti-algocratic argument, which attempts to ground algocratic illegitimacy in the opacity of algocratic decision-making, it is argued that the argument oversimplifies matters. Opacity can delegitimise, but not simpliciter. It delegitimises because (...)
  • Uncovering the gap: challenging the agential nature of AI responsibility problems. Joan Llorca Albareda - 2025 - AI and Ethics:1-14.
    In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. Systematic analysis of the agential dimension of AI will lead me to outline a disjunction between the two problems. Either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI agency we begin with, the responsibility (...)
  • Synthesizing Methuselah: The Question of Artificial Agelessness. Richard B. Gibson - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):60-75.
    As biological organisms, we age and, eventually, die. However, age’s deteriorating effects may not be universal. Some theoretical entities, due to their synthetic composition, could exist independently from aging—artificial general intelligence (AGI). With adequate resource access, an AGI could theoretically be ageless and would be, in some sense, immortal. Yet, this need not be inevitable. Designers could imbue AGIs with artificial mortality via an internal shut-off point. The question, though, is, should they? Should researchers curtail an AGI’s potentially endless lifespan (...)
  • Who Should Obey Asimov’s Laws of Robotics? A Question of Responsibility. Maria Hedlund & Erik Persson - 2024 - In Spyridon Stelios & Kostas Theologou, The Ethics Gap in the Engineering of the Future. Emerald Publishing. pp. 9-25.
    The aim of this chapter is to explore the safety value of implementing Asimov’s Laws of Robotics as a future general framework that humans should obey. Asimov formulated laws to make explicit the safeguards of the robots in his stories: (1) A robot may not injure or harm a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given to it by human beings except where such orders would conflict (...)
  • Computing and moral responsibility. Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
  • Designing responsible agents. Zacharus Gudmunsen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Raul Hakli & Pekka Mäkelä (2016, 2019) make a popular assumption in machine ethics explicit by arguing that artificial agents cannot be responsible because they are designed. Designed agents, they think, are analogous to manipulated humans and therefore not meaningfully in control of their actions. Contrary to this, I argue that under all mainstream theories of responsibility, designed agents can be responsible. To do so, I identify the closest parallel discussion in the literature on responsibility and free will, which concerns (...)
  • Bullshit universities: the future of automated education. Robert Sparrow & Gene Flenady - forthcoming - AI and Society:1-12.
    The advent of ChatGPT, and the subsequent rapid improvement in the performance of what has become known as Generative AI, has led to many pundits declaring that AI will revolutionize education, as well as work, in the future. In this paper, we argue that enthusiasm for the use of AI in tertiary education is misplaced. A proper understanding of the nature of the outputs of AI suggests that it would be profoundly misguided to replace human teachers with AI, while the (...)
  • “All AIs are Psychopaths”? The Scope and Impact of a Popular Analogy. Elina Nerantzi - 2025 - Philosophy and Technology 38 (1):1-24.
    Artificial Intelligence (AI) Agents are often compared to psychopaths in popular news articles. The headlines are ‘eye-catching’, but the questions of what this analogy means or why it matters are hardly answered. The aim of this paper is to take this popular analogy ‘seriously’. By that, I mean two things. First, I aim to explore the scope of this analogy, i.e. to identify and analyse the shared properties of AI agents and psychopaths, namely, their lack of moral emotions and their (...)
  • Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • Basic issues in AI policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz, Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  • Osaammeko rakentaa moraalisia toimijoita? [Can We Build Moral Agents?] Antti Kauppinen - 2021 - In Panu Raatikainen, Tekoäly, ihminen ja yhteiskunta. Helsinki: Gaudeamus.
    To be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some degree, in accordance with them. If we are full-fledged moral agents, we also understand why certain acts are wrong, and can therefore flexibly adapt our behaviour to different situations. I argue that there are no AI systems on the horizon that could genuinely care about doing right or understand the demands of morality, because these capacities require experiential consciousness and holistic judgment. We therefore cannot offload onto machines the responsibility for their actions. Instead, we should aim to build artificial right-doers: systems that do not (...)
  • Reviewing the Case of Online Interpersonal Trust. Mirko Tagliaferri - 2023 - Foundations of Science 28 (1):225-254.
    The aim of this paper is to better qualify the problem of online trust. The problem of online trust is that of evaluating whether online environments have the proper design to enable trust. This paper tries to better qualify this problem by showing that there is no unique answer, but only conditional considerations that depend on the conception of trust assumed and the features that are included in the environments themselves. In fact, the major issue concerning traditional debates surrounding online (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • How to improvise: a philosophical account of the nature, scope and limits of improvisational agency. Steven Diggin - 2025 - Dissertation, University of British Columbia
    I develop an account of the nature of improvisation, as a distinctive form of temporally extended agency. In contrast to the standard view, which says that agents perform extended actions by means of planning them in advance, I argue that improvising involves planning one’s actions contemporaneously with their performance, or equivalently, planning these actions after one has already begun performing them. Improvisation is psychologically distinctive because it involves the adoption of backward-looking intentions, or retroplans, which represent the actions that (...)
  • Kuinka ihmismieli vääristää keskustelua tekoälyn riskeistä ja etiikasta. Kognitiotieteellisiä näkökulmia keskusteluun. [How the Human Mind Distorts the Debate on AI Risks and Ethics: Cognitive-Science Perspectives.] Michael Laakasuo, Aku Visala & Jussi Palomäki - 2020 - Ajatus 77 (1):131-168.
    The debate over the ethical and political questions raised by the application of artificial intelligence is currently running hot. In this contribution we do not wish to join that debate by taking up some particular ethical problem. Instead, we try to say something about the debate itself and its difficulty. We want to draw attention to how various cognitive dispositions and fallacies of the human mind can, without our noticing, shape the way we perceive and understand AI and the ethical questions surrounding it. Once we better understand how hard it really is to grasp these questions with the categories of our everyday minds, and once we recognize the resulting fallacies and distortions of thought, we become capable of a more sophisticated (...)
  • Thinking unwise: a relational u-turn. Nicholas Barrow - 2022 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt, Social Robots in Social Institutions. Proceedings of Robophilosophy’22. IOS Press.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  • Artificial Moral Agency: Autonomy and Evolution. Z. Gudmunsen - 2023 - Dissertation, University of Leeds
    This thesis aims to establish the possibility of, and a pathway to, artificial moral agents. Artificial moral agents are argued to be of value not just for their practical performance, but because they offer a non-human perspective that can be used to make human theories more objective. The thesis works to a definition of moral agency, arguing that moral agents need to be intentional, morally reasons-responsive, and autonomous, but not necessarily conscious. Then, applying this to artificial agents, it draws on (...)
  • The Ethics Gap in the Engineering of the Future. Spyridon Stelios & Kostas Theologou (eds.) - 2024 - Emerald Publishing.