  • Just War and Robots’ Killings. Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-322.
    May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of rights assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • The Strategic Robot Problem: Lethal Autonomous Weapons in War. Heather M. Roff - 2014 - Journal of Military Ethics 13 (3):211-227.
    The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering more and more attention. Much of the argument revolves around whether such machines would be able to uphold the principle of noncombatant immunity. However, much of the present debate fails to take into consideration the practical realities of contemporary armed conflict, particularly generating military objectives and the adherence to a targeting process. This paper argues that we must look to the targeting process (...)
  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Negotiating autonomy and responsibility in military robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • Service robots in the mirror of reflective research. Michael Decker - 2012 - Poiesis and Praxis 9 (3):181-200.
    Service robotics has increasingly become the focus of reflective research on new technologies over the last decade. The current state of technology is characterized by prototypical robot systems developed for specific application scenarios outside factories. This has enabled context-based Science and Technology Studies and technology assessments of service robotic systems. This contribution describes the status quo of this reflective research as the starting point for interdisciplinary technology assessment (TA), taking account of TA studies and, in particular, of publications from the (...)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • Beyond the responsibility gap. Discussion note on responsibility and liability in the use of brain-computer interfaces. Gerd Grübler - 2011 - AI and Society 26 (4):377-382.
    The article shows where the responsibility-gap argument regarding brain-computer interfaces acquires its plausibility, and suggests why the argument is nevertheless not plausible. By way of explanation, a distinction between the descriptive third-person perspective and the interpretative first-person perspective is introduced. Several examples and metaphors are used to show that ascription of agency and responsibility does not, even in simple cases, require that people be in causal control of every individual detail involved in an event. Taking up the (...)
  • Information Warfare: A Philosophical Perspective. [REVIEW] Mariarosaria Taddeo - 2012 - Philosophy and Technology 25 (1):105-120.
    This paper focuses on Information Warfare—the warfare characterised by the use of information and communication technologies. This is a fast growing phenomenon, which poses a number of issues ranging from the military use of such technologies to its political and ethical implications. The paper presents a conceptual analysis of this phenomenon with the goal of investigating its nature. Such an analysis is deemed to be necessary in order to lay the groundwork for future investigations into this topic, addressing the ethical (...)
  • Committing Crimes with BCIs: How Brain-Computer Interface Users can Satisfy Actus Reus and be Criminally Responsible. Kramer Thompson - 2021 - Neuroethics 14 (S3):311-322.
    Brain-computer interfaces allow agents to control computers without moving their bodies. The agents imagine certain things and the brain-computer interfaces read the concomitant neural activity and operate the computer accordingly. But the use of brain-computer interfaces is problematic for criminal law, which requires that someone can only be found criminally responsible if they have satisfied the actus reus requirement: that the agent has performed some (suitably specified) conduct. Agents who affect the world using brain-computer interfaces do not obviously perform any (...)
  • The impact of digital health technologies on moral responsibility: a scoping review. E. Meier, T. Rigter, M. P. Schijven, M. van den Hoven & M. A. R. Bak - forthcoming - Medicine, Health Care and Philosophy:1-15.
    Recent publications on digital health technologies highlight the importance of ‘responsible’ use. References to the concept of responsibility are, however, frequently made without providing clear definitions of responsibility, thus leaving room for ambiguities. Addressing these uncertainties is critical since they might lead to misunderstandings, impacting the quality and safety of healthcare delivery. Therefore, this study investigates how responsibility is interpreted in the context of using digital health technologies, including artificial intelligence (AI), telemonitoring, wearables and mobile apps. We conducted a scoping (...)
  • Procedural fairness in algorithmic decision-making: the role of public engagement. Marie Christin Decker, Laila Wegner & Carmen Leicht-Scholten - 2025 - Ethics and Information Technology 27 (1):1-16.
    Despite the widespread use of automated decision-making (ADM) systems, they are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness (...)
  • Generative Artificial Intelligence and Authorship Gaps. Tamer Nawar - 2024 - American Philosophical Quarterly 61 (4):355-367.
    The ever increasing use of generative artificial intelligence raises significant questions about authorship and related issues such as credit and accountability. In this paper, I consider whether works produced by means of users inputting natural language prompts into Generative Adversarial Networks are works of authorship. I argue that they are not. This is not due to concerns about randomness or machine-assistance compromising human labor or intellectual vision, but instead due to the syntactical and compositional limitations of existing AI systems in (...)
  • Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)
  • How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  • An Ellulian analysis of propaganda in the context of generative AI. Xiaomei Bi, Xingyuan Su & Xiaoyan Liu - 2024 - Ethics and Information Technology 26 (3):1-11.
    The application of generative artificial intelligence (GenAI) technologies in the field of propaganda influences information creation, dissemination, and reception, and introduces new ethical challenges. This paper revisits the philosophical discourses of Jacques Ellul on technology and propaganda, placing them within the context of the rise of today’s generative AI technologies. Ellul identified the First Industrial Revolution as the initial juncture in the history of human technology that formed technique as a social phenomenon, which subsequently shaped the nature of propaganda as (...)
  • Impactful Conceptual Engineering: Designing Technological Artefacts Ethically. Herman Veluwenkamp - forthcoming - Ethical Theory and Moral Practice:1-16.
    Conceptual engineering is the design, evaluation and implementation of concepts. Despite its popularity, some have argued that the methodology is not worthwhile, because the implementation of new concepts is both inscrutable and beyond our control. In the recent literature we see different responses to this worry. Some have argued that it is for political reasons just as well that implementation is such a difficult task, while others have challenged the metasemantic and social assumptions that underlie this skepticism about implementation. In (...)
  • Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual (...)
  • Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  • An Anticipatory Approach to Ethico-Legal Implications of Future Neurotechnology. Stephen Rainey - 2024 - Science and Engineering Ethics 30 (3):1-15.
    This paper provides a justificatory rationale for recommending the inclusion of imagined future use cases in neurotechnology development processes, specifically for legal and policy ends. Including detailed imaginative engagement with future applications of neurotechnology can serve to connect ethical, legal, and policy issues potentially arising from the translation of brain stimulation research to the public consumer domain. Futurist scholars have for some time recommended approaches that merge creative arts with scientific development in order to theorise possible futures toward which current (...)
  • Gamification, Side Effects, and Praise and Blame for Outcomes. Sven Nyholm - 2024 - Minds and Machines 34 (1):1-21.
    “Gamification” refers to adding game-like elements to non-game activities so as to encourage participation. Gamification is used in various contexts: apps on phones motivating people to exercise, employers trying to encourage their employees to work harder, social media companies trying to stimulate user engagement, and so on and so forth. Here, I focus on gamification with this property: the game-designer (a company or other organization) creates a “game” in order to encourage the players (the users) to bring about certain outcomes (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This (...)
  • Machine agency and representation. Beba Cibralic & James Mattingly - 2024 - AI and Society 39 (1):345-352.
    Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that AI systems do not, and so cannot be agents. Properly understood there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Conceptual Engineering: For What Matters. Sebastian Köhler & Herman Veluwenkamp - 2024 - Mind 133 (530):400-427.
    Conceptual engineering is the enterprise of evaluating and improving our representational devices. But how should we conduct this enterprise? One increasingly popular answer to this question proposes that conceptual engineering should proceed in terms of the functions of our representational devices. In this paper, we argue that the best way of understanding this suggestion is in terms of normative functions, where normative functions of concepts are, roughly, things that they allow us to do that matter normatively (for example, things in (...)
  • Narrative responsibility and artificial intelligence. Mark Coeckelbergh - 2023 - AI and Society 38 (6):2437-2450.
    Most accounts of responsibility focus on one type of responsibility, moral responsibility, or address one particular aspect of moral responsibility such as agency. This article outlines a broader framework to think about responsibility that includes causal responsibility, relational responsibility, and what I call “narrative responsibility” as a form of “hermeneutic responsibility”, connects these notions of responsibility with different kinds of knowledge, disciplines, and perspectives on human being, and shows how this framework is helpful for mapping and analysing how artificial intelligence (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
  • Safety by simulation: theorizing the future of robot regulation. Mika Viljanen - 2024 - AI and Society 39 (1):139-154.
    Mobility robots may soon be among us, triggering a need for safety regulation. Robot safety regulation, however, remains underexplored, with only a few articles analyzing what regulatory approaches could be feasible. This article offers an account of the available regulatory strategies and attempts to theorize the effects of simulation-based safety regulation. The article first discusses the distinctive features of mobility robots as regulatory targets and argues that emergent behavior constitutes the key regulatory concern in designing robot safety regulation regimes. In (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • A Moral Bind? — Autonomous Weapons, Moral Responsibility, and Institutional Reality. Bartek Chomanski - 2023 - Philosophy and Technology 36.
    In “Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit” (2022), Mariarosaria Taddeo and Alexander Blanchard answer one of the most vexing issues in current ethics of technology: how to close the so-called “responsibility gap”? Their solution is to require that autonomous weapons systems (AWSs) may only be used if there is some human being who accepts the ex ante responsibility for those actions of the AWS that could not have been predicted or intended (in such cases, (...)
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  • Risk and Responsibility in Context. Adriana Placani & Stearns Broadhead (eds.) - 2023 - New York: Routledge.
    This volume bridges contemporary philosophical conceptions of risk and responsibility and offers an extensive examination of the topic. It shows that risk and responsibility combine in ways that give rise to new philosophical questions and problems. Philosophical interest in the relationship between risk and responsibility continues to rise, due in no small part to environmental crises, emerging technologies, legal developments, and new medical advances. Despite such interest, scholars are just now working out how to conceive of the links between (...)
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be (...)
  • Reasons for Meaningful Human Control. Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
    “Meaningful human control” is a term invented in the political and legal debate on autonomous weapons systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this (...)
  • Decentring the discoverer: how AI helps us rethink scientific discovery. Elinor Clark & Donal Khosrowi - 2022 - Synthese 200 (6):1-26.
    This paper investigates how intuitions about scientific discovery using artificial intelligence can be used to improve our understanding of scientific discovery more generally. Traditional accounts of discovery have been agent-centred: they place emphasis on identifying a specific agent who is responsible for conducting all, or at least the important part, of a discovery process. We argue that these accounts experience difficulties capturing scientific discovery involving AI and that similar issues arise for human discovery. We propose an alternative, collective-centred view as (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Can we Bridge AI’s responsibility gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of (...)
  • Technology as Driver for Morally Motivated Conceptual Engineering. Herman Veluwenkamp, Marianna Capasso, Jonne Maas & Lavinia Marin - 2022 - Philosophy and Technology 35 (3):1-25.
    New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach. Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems”, led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Vicarious liability: a solution to a problem of AI responsibility? Matteo Pascucci & Daniela Glavaničová - 2022 - Ethics and Information Technology 24 (3):1-11.
    Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming there is a unique responsibility gap, several different responsibility gaps, or no gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility (...)
  • Algorithmic Accountability in the Making. Deborah G. Johnson - 2021 - Social Philosophy and Policy 38 (2):111-127.
    Algorithms are now routinely used in decision-making; they are potent components in decisions that affect the lives of individuals and the activities of public and private institutions. Although use of algorithms has many benefits, a number of problems have been identified with their use in certain domains, most notably in domains where safety and fairness are important. Awareness of these problems has generated public discourse calling for algorithmic accountability. However, the current discourse focuses largely on algorithms and their opacity. I (...)
  • Techno-optimism: an Analysis, an Evaluation and a Modest Defence. John Danaher - 2022 - Philosophy and Technology 35 (2):1-29.
    What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2021 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • Distributed responsibility in human–machine interactions. Anna Strasser - 2021 - AI and Ethics.
    Artificial agents have become increasingly prevalent in human social life. In light of the diversity of new human–machine interactions, we face renewed questions about the distribution of moral responsibility. Besides positions denying the mere possibility of attributing moral responsibility to artificial systems, recent approaches discuss the circumstances under which artificial agents may qualify as moral agents. This paper revisits the discussion of how responsibility might be distributed between artificial agents and human interaction partners (including producers of artificial agents) and raises (...)