  • Tragic Choices and the Virtue of Techno-Responsibility Gaps.John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  • Robots, Law and the Retribution Gap.John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Automation and Utopia: Human Flourishing in an Age Without Work.John Danaher - 2019 - Cambridge, MA: Harvard University Press.
    Human obsolescence is imminent. We are living through an era in which our activity is becoming less and less relevant to our well-being and to the fate of our planet. This trend toward increased obsolescence is likely to continue in the future, and we must do our best to prepare ourselves and our societies for this reality. Far from being a cause for despair, this is in fact an opportunity for optimism. Harnessed in the right way, the technology that hastens (...)
  • Liability for Robots: Sidestepping the Gaps.Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  • Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • Markets without Symbolic Limits.Jason Brennan & Peter Martin Jaworski - 2015 - Ethics 125 (4):1053-1077.
    Semiotic objections to commodification hold that buying and selling certain goods and services is wrong because of what market exchange communicates or because it violates the meaning of certain goods, services, and relationships. We argue that such objections fail. The meaning of markets and of money is a contingent, socially constructed fact. Cultures often impute meaning to markets in harmful, socially destructive, or costly ways. Rather than semiotic objections giving us reason to judge certain markets as immoral, the usefulness of (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • The Case for Ethical Autonomy in Unmanned Systems.Ronald C. Arkin - 2010 - Journal of Military Ethics 9 (4):332-341.
    The underlying thesis of the research in ethical autonomy for lethal autonomous unmanned systems is that they will potentially be capable of performing more ethically on the battlefield than are human soldiers. In this article this hypothesis is supported by ongoing and foreseen technological advances and, perhaps equally important, by an assessment of the fundamental ability of human warfighters in today's battlespace. If this goal of better-than-human performance is achieved, even if still imperfect, it can result in a reduction in (...)
  • Attributability, Answerability, and Accountability: Toward a Wider Theory of Moral Responsibility.David Shoemaker - 2011 - Ethics 121 (3):602-632.
    Recently T. M. Scanlon and others have advanced an ostensibly comprehensive theory of moral responsibility—a theory of both being responsible and being held responsible—that best accounts for our moral practices. I argue that both aspects of the Scanlonian theory fail this test. A truly comprehensive theory must incorporate and explain three distinct conceptions of responsibility—attributability, answerability, and accountability—and the Scanlonian view conflates the first two and ignores the importance of the third. To illustrate what a truly comprehensive theory might look (...)
  • Responsible Artificial Intelligence: How to Develop and Use Ai in a Responsible Way.Virginia Dignum - 2019 - Springer Verlag.
    In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. Throughout (...)
  • Humans and Robots: Ethics, Agency, and Anthropomorphism.Sven Nyholm - 2020 - Rowman & Littlefield International.
    This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
  • Two Faces of Responsibility.Gary Watson - 1996 - Philosophical Topics 24 (2):227-248.
  • Editors’ Overview: Moral Responsibility in Technology and Engineering.Ibo van de Poel, Jessica Fahlquist, Neelke Doorn, Sjoerd Zwart & Lambèr Royakkers - 2012 - Science and Engineering Ethics 18 (1):1-11.
    In some situations in which undesirable collective effects occur, it is very hard, if not impossible, to hold any individual reasonably responsible. Such a situation may be referred to as the problem of many hands. In this paper we investigate how the problem of many hands can best be understood and why, and when, it exactly constitutes a problem. After analyzing climate change as an example, we propose to define the problem of many hands as the occurrence of a gap (...)
  • There Is No Techno-Responsibility Gap.Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military‐Industrial Complex.Isaac Taylor - 2021 - Journal of Applied Philosophy 38 (2):320-334.
    There has recently been increasing interest in the possibility and ethics of lethal autonomous weapons systems (LAWS), which would combine sophisticated AI with machinery capable of deadly force. One objection to LAWS is that their use will create a troubling responsibility gap, where no human agent can properly be held accountable for the outcomes that they create. While some authors have attempted to show that individual agents can, in fact, be responsible for the behaviour of LAWS in various circumstances, this (...)
  • Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.Bradley Jay Strawser - 2010 - Journal of Military Ethics 9 (4):342-368.
    A variety of ethical objections have been raised against the military employment of uninhabited aerial vehicles (UAVs, drones). Some of these objections are technological concerns over UAVs' abilities to function on par with their inhabited counterparts. This paper sets such concerns aside and instead focuses on supposed objections to the use of UAVs in principle. I examine several such objections currently on offer and show them all to be wanting. Indeed, I argue that we have a duty to protect an (...)
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • Killer robots.Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62–77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Responsibility as Answerability.Angela M. Smith - 2015 - Inquiry: An Interdisciplinary Journal of Philosophy 58 (2):99-126.
    It has recently become fashionable among those who write on questions of moral responsibility to distinguish two different concepts, or senses, of moral responsibility via the labels ‘responsibility as attributability’ and ‘responsibility as accountability’. Gary Watson was perhaps the first to introduce this distinction in his influential 1996 article ‘Two Faces of Responsibility’, but it has since been taken up by many other philosophers. My aim in this study is to raise some questions and doubts about this distinction and (...)
  • On Being Responsible and Holding Responsible.Angela M. Smith - 2007 - The Journal of Ethics 11 (4):465-484.
    A number of philosophers have recently argued that we should interpret the debate over moral responsibility as a debate over the conditions under which it would be “fair” to blame a person for her attitudes or conduct. What is distinctive about these accounts is that they begin with the stance of the moral judge, rather than that of the agent who is judged, and make attributions of responsibility dependent upon whether it would be fair or appropriate for a moral judge (...)
  • Attributability, Answerability, and Accountability: In Defense of a Unified Account.Angela M. Smith - 2012 - Ethics 122 (3):575-589.
  • Just war and robots’ killings.Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-22.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them.Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • No Such Thing as Killer Robots.Michael Robillard - 2017 - Journal of Applied Philosophy 35 (4):705-717.
    There have been two recent strands of argument arguing for the pro tanto impermissibility of fully autonomous weapon systems. On Sparrow's view, AWS are impermissible because they generate a morally problematic ‘responsibility gap’. According to Purves et al., AWS are impermissible because moral reasoning is not codifiable and because AWS are incapable of acting for the ‘right’ reasons. I contend that these arguments are flawed and that AWS are not morally problematic in principle. Specifically, I contend that these arguments presuppose (...)
  • Responsibility and the Negligence Standard.Joseph Raz - 2010 - Oxford Journal of Legal Studies 30 (1):1-18.
    The paper has a dual aim: to analyse the structure of negligence, and to use it to offer an explanation of responsibility (for actions, omissions, consequences) in terms of the relations which must exist between the action (omission, etc.) and the agent's powers of rational agency if the agent is responsible for the action. The discussion involves reflections on the relations between the law and the morality of negligence, the difference between negligence and strict liability, the role of excuses and the (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons.Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Just and Unjust Wars: A Moral Argument with Historical Illustrations.Barrie Paskins & Michael Walzer - 1981 - Philosophical Quarterly 31 (124):285.
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci.Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
  • Negotiating autonomy and responsibility in military robots.Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • The responsibility gap: Ascribing responsibility for the actions of learning automata.Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • What's So Bad About Killer Robots?Alex Leveringhaus - 2018 - Journal of Applied Philosophy 35 (2):341-358.
    Robotic warfare has now become a real prospect. One issue that has generated heated debate concerns the development of ‘Killer Robots’. These are weapons that, once programmed, are capable of finding and engaging a target without supervision by a human operator. From a conceptual perspective, the debate on Killer Robots has been rather confused, not least because it is unclear how central elements of these weapons can be defined. Offering a precise take on the relevant conceptual issues, the article contends (...)
  • Artificial intelligence and responsibility.Lode Lauwaert - 2021 - AI and Society 36 (3):1001-1009.
    In the debate on whether to ban LAWS, moral arguments are mainly used. One of these arguments, proposed by Sparrow, is that the use of LAWS goes hand in hand with the responsibility gap. Together with the premise that the ability to hold someone responsible is a necessary condition for the admissibility of an act, Sparrow believes that this leads to the conclusion that LAWS should be prohibited. In this article, it will be shown that Sparrow’s argumentation for both premises (...)
  • Debunking (the) Retribution (Gap).Steven R. Kraaijeveld - 2020 - Science and Engineering Ethics 26 (3):1315-1328.
    Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive (...)
  • The Expressivist Account of Punishment, Retribution, and the Emotions.Peter Königs - 2013 - Ethical Theory and Moral Practice 16 (5):1029-1047.
    This paper provides a discussion of the role that emotions may play in the justification of punishment. On the expressivist account of punishment, punishment has the purpose of expressing appropriate emotional reactions to wrongdoing, such as indignation, resentment or guilt. I will argue that this expressivist approach fails as these emotions can be expressed other than through the infliction of punishment. Another argument for hard treatment put forward by expressivists states that punitive sanctions are necessary in order for the law (...)
  • Artificial intelligence and responsibility gaps: what is the problem?Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • The Problem with Negligence.Matt King - 2009 - Social Theory and Practice 35 (4):577-595.
    Ordinary morality judges agents blameworthy for negligently produced harms. In this paper I offer two main reasons for thinking that explaining just how negligent agents are responsible for the harms they produce is more problematic than one might think. First, I show that negligent conduct is characterized by the lack of conscious control over the harm, which conflicts with the ordinary view that responsibility for something requires at least some conscious control over it. Second, I argue that negligence is relevantly (...)
  • Instrumental Robots.Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts.Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss the issue of how artificial intelligence could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
  • Technology with No Human Responsibility?Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
  • Robots and Respect: A Response to Robert Sparrow.Ryan Jenkins & Duncan Purves - 2016 - Ethics and International Affairs 30 (3):391-400.
    Robert Sparrow argues that several initially plausible arguments in favor of the deployment of autonomous weapons systems (AWS) in warfare fail, and that their deployment faces a serious moral objection: deploying AWS fails to express the respect for the casualties of war that morality requires. We critically discuss Sparrow’s argument from respect and respond on behalf of some objections he considers. Sparrow’s argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they (...)
  • Responsible AI Through Conceptual Engineering.Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Responsibility for Killer Robots.Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take (...)
  • On the moral responsibility of military robots.Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
  • The Ethics of AI Ethics: An Evaluation of Guidelines.Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Shooting to Kill: The Ethics of Police and Military Use of Lethal Force.Seumas Miller - 2016 - New York: Oxford University Press USA.
    Terrorism, the use of military force in Afghanistan, Iraq and Syria, and the fatal police shootings of unarmed persons have all contributed to renewed interest in the ethics of police and military use of lethal force and its moral justification. In this book, philosopher Seumas Miller analyzes the various moral justifications and moral responsibilities involved in the use of lethal force by police and military combatants, relying on a distinctive normative teleological account of institutional roles. His conception constitutes a novel (...)
  • Soft ethics and the governance of the digital.Luciano Floridi - 2018 - Philosophy and Technology 31 (1):1-8.
    What is the relation between the ethics, the law, and the governance of the digital? In this article I articulate and defend what I consider the most reasonable answer.
  • Soft ethics: its application to the General Data Protection Regulation and its dual advantage.Luciano Floridi - 2018 - Philosophy and Technology 31 (1):163-167.
    In previous works (Floridi 2018) I introduced the distinction between hard ethics (which may broadly be described as what is morally right and wrong independently of whether something is legal or illegal), and soft or post-compliance ethics (which focuses on what ought to be done over and above existing legislation). This paper analyses the applicability of soft ethics to the General Data Protection Regulation and advances the theory that soft ethics has a dual advantage—as both an opportunity strategy and a (...)
  • Just and Unjust Wars: A Moral Argument with Historical Illustrations.Michael Walzer - 1979 - Science and Society 43 (2):247-249.