  • Ethics of Artificial Intelligence. Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • Tragic Choices and the Virtue of Techno-Responsibility Gaps. John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  • Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)
  • Can we Bridge AI’s responsibility gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of (...)
  • Techno-optimism: an Analysis, an Evaluation and a Modest Defence. John Danaher - 2022 - Philosophy and Technology 35 (2):1-29.
    What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each (...)
  • Against Corporate Responsibility. Lars J. K. Moen - 2024 - Journal of Social Philosophy 55 (1):44–61.
    Can a group be morally responsible instead of, or in addition to, its members? An influential defense of corporate responsibility is based on results in social choice theory suggesting that a group can form and act on attitudes held by few, or even none, of its members. The members therefore cannot be (fully) responsible for the group’s behavior; the group itself, as a corporate agent, must be responsible. In this paper, I reject this view of corporate responsibility by showing how (...)
  • (1 other version) Treating people as individuals and as members of groups. Lauritz Aastrup Munch & Nicolai Knudsen - 2024 - Philosophy and Phenomenological Research 110 (1):253-272.
    Many believe that we ought to treat people as individuals and that this form of treatment is in some sense incompatible with treating people as members of groups. Yet, the relation between these two kinds of treatments is elusive. In this paper, we develop a novel account of the normative requirement to treat people as individuals. According to this account, treating people as individuals requires treating people as agents in the appropriate capacity. We call this the Agency Attunement Account. This (...)
  • Can AI systems have free will? Christian List - manuscript
    While there has been much discussion of whether AI systems could function as moral agents or acquire sentience, there has been very little discussion of whether AI systems could have free will. I sketch a framework for thinking about this question, inspired by Daniel Dennett’s work. I argue that, to determine whether an AI system has free will, we should not look for some mysterious property, expect its underlying algorithms to be indeterministic, or ask whether the system is unpredictable. Rather, (...)
  • How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  • Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? Sven Nyholm - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):76-88.
    This paper discusses two opposing views about the relation between artificial intelligence (AI) and human intelligence: on the one hand, a worry that heavy reliance on AI technologies might make people less intelligent and, on the other, a hope that AI technologies might serve as a form of cognitive enhancement. The worry relates to the notion that if we hand over too many intelligence-requiring tasks to AI technologies, we might end up with fewer opportunities to train our own intelligence. Concerning (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2021 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • Groups as fictional agents. Lars J. K. Moen - 2025 - Inquiry: An Interdisciplinary Journal of Philosophy 68 (3):1049–1068.
    Can groups really be agents or is group agency just a fiction? Christian List and Philip Pettit argue influentially for group-agent realism by showing how certain groups form and act on attitudes in ways they take to be unexplainable at the level of the individual agents constituting them. Group agency is therefore considered not a fiction or a metaphor but a reality we must account for in explanations of certain social phenomena. In this paper, I challenge this defence of group-agent (...)
  • Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case. Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article begins with the question of whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  • The Actionless Agent: An Account of Human-CAI Relationships. Charles E. Binkley & Bryan Pilkington - 2023 - American Journal of Bioethics 23 (5):25-27.
    We applaud Sedlakova and Trachsel’s work and their description of conversational artificial intelligence (CAI) as possessing a hybrid nature with features of both a tool and an agent (Sedlakova and...
  • Desire-Fulfilment and Consciousness. Andreas Mogensen - manuscript
    I show that there are good reasons to think that some individuals without any capacity for consciousness should be counted as welfare subjects, assuming that desire-fulfilment is a welfare good and that any individuals who can accrue welfare goods are welfare subjects. While other philosophers have argued for similar conclusions, I show that they have done so by relying on a simplistic understanding of the desire-fulfilment theory. My argument is intended to be sensitive to the complexities and nuances of contemporary (...)
  • Group Responsibility. Christian List - 2022 - In Dana Kay Nelkin & Derk Pereboom (eds.), The Oxford Handbook of Moral Responsibility. New York: Oxford University Press.
    Are groups ever capable of bearing responsibility, over and above their individual members? This chapter discusses and defends the view that certain organized collectives – namely, those that qualify as group moral agents – can be held responsible for their actions, and that group responsibility is not reducible to individual responsibility. The view has important implications. It supports the recognition of corporate civil and even criminal liability in our legal systems, and it suggests that, by recognizing group agents as loci (...)
  • Gamification, Side Effects, and Praise and Blame for Outcomes. Sven Nyholm - 2024 - Minds and Machines 34 (1):1-21.
    “Gamification” refers to adding game-like elements to non-game activities so as to encourage participation. Gamification is used in various contexts: apps on phones motivating people to exercise, employers trying to encourage their employees to work harder, social media companies trying to stimulate user engagement, and so on and so forth. Here, I focus on gamification with this property: the game-designer (a company or other organization) creates a “game” in order to encourage the players (the users) to bring about certain outcomes (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have strong potential to become more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot interactions per se, and we need (...)
  • Design for operator contestability: control over autonomous systems by introducing defeaters. Herman Veluwenkamp & Stefan Buijsman - 2025 - AI and Ethics 1.
    This paper introduces the concept of Operator Contestability in AI systems: the principle that those overseeing AI systems (operators) must have the necessary control to be accountable for the decisions made by these algorithms. We argue that designers have a duty to ensure operator contestability. We demonstrate how this duty can be fulfilled by applying the ‘Design for Defeaters’ framework, which provides strategies to embed tools within AI systems that enable operators to challenge decisions. Defeaters are designed to contest either the (...)
  • Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • Collective Agency and Positive Political Theory. Lars Moen - 2024 - Journal of Theoretical Politics 36 (1):83–98.
    Positive political theorists typically deny the possibility of collective agents by understanding aggregation problems to imply that groups are not rational decision-makers. This view contrasts with List and Pettit’s view that such problems actually imply the necessity of accounting for collective agents in explanations of group behaviour. In this paper, I explore these conflicting views and ask whether positive political theorists should alter their individualist analyses of groups like legislatures, political parties, and constituent assemblies. I show how we fail to (...)
  • Uncovering the gap: challenging the agential nature of AI responsibility problems. Joan Llorca Albareda - 2025 - AI and Ethics:1-14.
    In this paper, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. Systematic analysis of the agential dimension of AI will lead me to outline a disjunction between the two problems. Either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies. Depending on which conception of AI agency we begin with, the responsibility (...)
  • AI as Agency without Intelligence: On Artificial Intelligence as a New Form of Artificial Agency and the Multiple Realisability of Agency Thesis. Luciano Floridi - 2025 - Philosophy and Technology 38 (1):1-27.
  • Autonomous Artificial Intelligence and Liability: a Comment on List. Michael Da Silva - 2022 - Philosophy and Technology 35 (2):1-6.
    Christian List argues that responsibility gaps created by viewing artificial intelligence as intentional agents are problematic enough that regulators should only permit the use of autonomous AI in high-stakes settings where AI is designed to be moral or a liability transfer agreement will fill any gaps. This work challenges List’s proposed condition. A requirement for “moral” AI is too onerous given technical challenges and other ways to check AI quality. Moreover, transfer agreements only plausibly fill responsibility gaps by applying independently (...)
  • What kinds of groups are group agents? Jimmy Lewis-Martin - 2022 - Synthese 200 (4):1-19.
    For a group to be an agent, it must be individuated from its environment and other systems. It must, in other words, be an individual. Despite the central importance of individuality for understanding group agency, the concept has been significantly overlooked. I propose to fill this gap in our understanding of group individuality by arguing that agents are autonomous as it is commonly understood in the enactive literature. According to this autonomous individuation account, an autonomous system is one wherein the (...)
  • Complexity and Particularity: An Argument for the Impossibility of Artificial Intelligence. Emanuele Martinelli - 2024 - Cosmos+Taxis 12 (5+6):42-57.
    Landgrebe and Smith (2022) have recently offered an important mathematical argument against the possibility of Artificial General Intelligence (AGI): human intelligence is a complex system; complex systems have some properties that cannot be modelled mathematically; hence we have no viable way to build an AI that would be able to emulate human intelligence. The issue of complexity is thus at the heart of the Landgrebe and Smith approach, and they tackle this issue by postulating a set of conditions, derived from (...)
  • No Wellbeing for Robots (and Hence no Rights). Peter Königs - 2025 - American Philosophical Quarterly 62 (2):191-208.
    A central question in AI ethics concerns the moral status of robots. This article argues against the idea that they have moral status. It proceeds by defending the assumption that consciousness is necessary for welfare subjectivity. Since robots most likely lack consciousness, and welfare subjectivity is necessary for moral status, it follows that robots lack moral status. The assumption that consciousness is necessary for welfare subjectivity appears to be in tension with certain widely accepted theories of wellbeing, especially versions of (...)
  • Responsibility Gaps and Technology: Old Wine in New Bottles? Ann-Katrien Oimann & Fabio Tollon - 2025 - Journal of Applied Philosophy 42 (1):337-356.
    Recent work in philosophy of technology has come to bear on the question of responsibility gaps. Some authors argue that the increase in the autonomous capabilities of decision-making systems makes it impossible to properly attribute responsibility for AI-based outcomes. In this article we argue that one important, and often neglected, feature of recent debates on responsibility gaps is how this debate maps on to old debates in responsibility theory. More specifically, we suggest that one of the key questions that is (...)
  • When to Fill Responsibility Gaps: A Proposal. Michael Da Silva - forthcoming - Journal of Value Inquiry:1-26.
  • Human achievement and artificial intelligence. Brett Karlan - 2023 - Ethics and Information Technology 25 (3):1-12.
    In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond which any humans can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game at (...)
  • The moral decision machine: a challenge for artificial moral agency based on moral deference. Z. Gudmunsen - 2024 - AI and Ethics.
    Humans are responsible moral agents in part because they can competently respond to moral reasons. Several philosophers have argued that artificial agents cannot do this and therefore cannot be responsible moral agents. I present a counterexample to these arguments: the ‘Moral Decision Machine’. I argue that the ‘Moral Decision Machine’ responds to moral reasons just as competently as humans do. However, I suggest that, while a hopeful development, this does not warrant strong optimism about ‘artificial moral agency’. The ‘Moral Decision (...)
  • (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • Moral Complexity in Traffic: Advancing the ADC Model for Automated Driving Systems. Dario Cecchini & Veljko Dubljević - 2025 - Science and Engineering Ethics 31 (1):1-17.
    The incorporation of ethical settings in Automated Driving Systems (ADSs) has been extensively discussed in recent years with the goal of enhancing potential stakeholders’ trust in the new technology. However, a comprehensive ethical framework for ADS decision-making, capable of merging multiple ethical considerations and investigating their consistency is currently missing. This paper addresses this gap by providing a taxonomy of ADS decision-making based on the Agent-Deed-Consequences (ADC) model of moral judgment. Specifically, we identify three main components of traffic moral judgment: (...)
  • The Man Behind the Curtain: Appropriating Fairness in AI. Marcin Korecki, Guillaume Köstner, Emanuele Martinelli & Cesare Carissimo - 2024 - Minds and Machines 34 (1):1-30.
    Our goal in this paper is to establish a set of criteria for understanding the meaning and sources of attributing (un)fairness to AI algorithms. To do so, we first establish that (un)fairness, like other normative notions, can be understood in a proper primary sense and in secondary senses derived by analogy. We argue that AI algorithms cannot be said to be (un)fair in the proper sense due to a set of criteria related to normativity and agency. However, we demonstrate how (...)
  • No Agent in the Machine: Being Trustworthy and Responsible about AI. Niël Henk Conradie & Saskia K. Nagel - 2024 - Philosophy and Technology 37 (2):1-24.
    Many recent AI policies have been structured under labels that follow a particular trend: national or international guidelines, policies or regulations, such as the EU’s and USA’s ‘Trustworthy AI’ and China’s and India’s adoption of ‘Responsible AI’, use a label that follows the recipe of [agentially loaded notion + ‘AI’]. A result of this branding, even if implicit, is to encourage the application by laypeople of these agentially loaded notions to the AI technologies themselves. Yet, these notions are appropriate only (...)
  • Conceptualizing Automated Decision-Making in Organizational Contexts. Anna Katharina Boos - 2024 - Philosophy and Technology 37 (3):1-30.
    Despite growing interest in automated (or algorithmic) decision-making (ADM), little work has been done to conceptually clarify the term. This article aims to tackle this issue by developing a conceptualization of ADM specifically tailored to organizational contexts. It has two main goals: (1) to meaningfully demarcate ADM from similar, yet distinct algorithm-supported practices; and (2) to draw internal distinctions such that different ADM types can be meaningfully distinguished. The proposed conceptualization builds on three arguments: First, ADM primarily refers to the (...)
  • Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach. Brandon Ferlito, Seppe Segers, Michiel De Proost & Heidi Mertes - 2024 - Science and Engineering Ethics 30 (4):1-14.
    Due to its enormous potential, artificial intelligence (AI) can transform healthcare on a seemingly infinite scale. However, as we continue to explore the immense potential of AI, it is vital to consider the ethical concerns associated with its development and deployment. One specific concern that has been flagged in the literature is the responsibility gap (RG) due to the introduction of AI in healthcare. When the use of an AI algorithm or system results in a negative outcome for a patient(s), (...)
  • An agent-based approach to the limits of economic planning. Emanuele Martinelli - forthcoming - AI and Society:1-13.
    Mises’ and Hayek’s arguments against central economic planning have long been taken as definitive proof that a centrally planned economy managed by the government would be impossible. Today, however, the exponential rise in the capacities of AI has opened up the possibility that supercomputers could have what it takes to plan the national economy. The ‘economic calculation debate’ has thus reignited. Arguably, this is because neither Mises nor Hayek has given a clear and conclusive argument why central planning of the (...)
  • Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of “Responsibility” in Artificial Intelligence within the Healthcare Context. Sarah Bouhouita-Guermech & Hazar Haidar - 2024 - Asian Bioethics Review 16 (3):315-344.
    The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, (...)
  • (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - 2024 - Science and Engineering Ethics 30 (6):1-19.
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness (...)
  • Unpredictable robots elicit responsibility attributions. Matija Franklin, Edmond Awad, Hal Ashton & David Lagnado - 2023 - Behavioral and Brain Sciences 46:e30.
    Do people hold robots responsible for their actions? While Clark and Fischer present a useful framework for interpreting social robots, we argue that they fail to account for people's willingness to assign responsibility to robots in certain contexts, such as when a robot performs actions not predictable by its user or programmer.
  • The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership. Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by the (...)