References
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
  • How AI can AID bioethics.Walter Sinnott-Armstrong & Joshua August Skorburg - forthcoming - Journal of Practical Ethics.
    This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: Top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about how (...)
  • Autonomous killer robots are probably good news.Vincent C. Müller - 2016 - In Ezio Di Nucci & Filippo Santoni de Sio (eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. London: Ashgate. pp. 67-81.
    Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away (...)
  • Is Explainable AI Responsible AI?Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans (...)
  • Engaging the Public in the Ethics of Robots for War and Peace.Peter Danielson - 2011 - Philosophy and Technology 24 (3):239-249.
    Emerging technologies like robotics for war and peace stress our moral norms and generate much public interest and controversy. We use this interest to attract participants to an innovative on-line survey platform, designed for experimenting with public engagement in the ethics of technology. In particular, the N-Reasons platform addresses several issues in democratic ethics: the cost of public participation, the methodological issue of feasible reflective ethical equilibrium (how can individuals in a large group take into account the ethical views of (...)
  • From Battlefield to Newsroom: Ethical Implications of Drone Technology in Journalism.Kathleen Bartzen Culver - 2014 - Journal of Mass Media Ethics 29 (1):52-64.
    Unmanned Aerial Vehicles, commonly known as “drones,” are a military technology now being developed for civilian and commercial use in the United States. With the federal government moving to develop rules for these uses in U.S. airspace by 2015, technologists, researchers, and news organizations are considering application of drone technology for reporting and data gathering. UAVs offer an inexpensive way to put cameras and sensors in the air to capture images and data but also pose serious concerns about safety, privacy, (...)
  • Automatic decision-making and reliability in robotic systems: some implications in the case of robot weapons.Roberto Cordeschi - 2013 - AI and Society 28 (4):431-441.
    In this article, I shall examine some of the issues and questions involved in the technology of autonomous robots, a technology that has developed greatly and is advancing rapidly. I shall do so with reference to a particularly critical field: autonomous military robotic systems. In recent times, various issues concerning the ethical implications of these systems have been the object of increasing attention from roboticists, philosophers and legal experts. The purpose of this paper is not to deal with these issues, (...)
  • Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW]José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being.Jason Borenstein & Ron Arkin - 2016 - Science and Engineering Ethics 22 (1):31-46.
    Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to “nudge” their human users in the direction of being “more ethical”. More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture “socially just” tendencies in their human counterparts. Designing technological artifacts in such a (...)
  • Action Type Deontic Logic.Martin Mose Bentzen - 2014 - Journal of Logic, Language and Information 23 (4):397-414.
    A new deontic logic, Action Type Deontic Logic, is presented. To motivate this logic, a number of benchmark cases are shown, representing inferences a deontic logic should validate. Some of the benchmark cases are singled out for further comments and some formal approaches to deontic reasoning are evaluated with respect to the benchmark cases. After that follows an informal introduction to the ideas behind the formal semantics, focussing on the distinction between action types and action tokens. Then the syntax and (...)
  • The Government of Evil Machines: an Application of Romano Guardini’s Thought on Technology.Enrico Beltramini - 2021 - Scientia et Fides 9 (1):257-281.
    In this article I propose a theological reflection on the philosophical assumptions behind the idea that intelligent machines can be governed through ethical protocols, which may apply either to the people who develop the machines or to the machines themselves, or both. This idea is particularly relevant in the case of machines’ extreme wrongdoing, a wrongdoing that becomes an existential risk for humankind. I call this extreme wrongdoing ‘evil.’ Thus, this article is a theological account of the philosophical assumptions behind (...)
  • On How to Build a Moral Machine.Paul Bello & Selmer Bringsjord - 2013 - Topoi 32 (2):251-266.
    Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood; all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for (...)
  • Evil and roboethics in management studies.Enrico Beltramini - 2019 - AI and Society 34 (4):921-929.
    In this article, I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users’ reaction to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.
  • Armed military robots: editorial.Jürgen Altmann, Peter Asaro, Noel Sharkey & Robert Sparrow - 2013 - Ethics and Information Technology 15 (2):73-76.
    Arming uninhabited vehicles is an increasing trend. Widespread deployment can bring dangers for arms-control agreements and international humanitarian law. Armed UVs can destabilise the situation between potential opponents. Smaller systems can be used for terrorism. Using a systematic definition, existing international regulation of armed UVs in the fields of arms control, export control and transparency measures is reviewed; these partly include armed UVs, but leave large gaps. For preventive arms control a general prohibition of armed UVs would be best. If (...)
  • Arms control for armed uninhabited vehicles: an ethical issue.Jürgen Altmann - 2013 - Ethics and Information Technology 15 (2):137-152.
    Arming uninhabited vehicles (UVs) is an increasing trend. Widespread deployment can bring dangers for arms-control agreements and international humanitarian law (IHL). Armed UVs can destabilise the situation between potential opponents. Smaller systems can be used for terrorism. Using a systematic definition, existing international regulation of armed UVs in the fields of arms control, export control and transparency measures is reviewed; these partly include armed UVs, but leave large gaps. For preventive arms control a general prohibition of armed UVs would be (...)
  • Information technology and moral values.John Sullins - forthcoming - Stanford Encyclopedia of Philosophy.
    An encyclopedia entry on the moral impacts that arise when information technologies are used to record, communicate and organize information, including the moral challenges of information technology, specific moral and cultural challenges such as online games, virtual worlds, malware, the technology transparency paradox, ethical issues in AI and robotics, and the acceleration of change in technologies. It concludes with a look at information technology as a model for moral change, moral systems and moral agents.
  • Robots as Weapons in Just Wars.Marcus Schulzke - 2011 - Philosophy and Technology 24 (3):293-306.
    This essay analyzes the use of military robots in terms of the jus in bello concepts of discrimination and proportionality. It argues that while robots may make mistakes, they do not suffer from most of the impairments that interfere with human judgment on the battlefield. Although robots are imperfect weapons, they can exercise as much restraint as human soldiers, if not more. Robots can be used in a way that is consistent with just war theory when they are programmed to (...)
  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2012 - In Peter Adamson (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Is Collective Agency a Coherent Idea? Considerations from the Enactive Theory of Agency.Mog Stapleton & Tom Froese - 2015 - In Catrin Misselhorn (ed.), Collective Agency and Cooperation in Natural and Artificial Systems. Springer Verlag. pp. 219-236.
    Whether collective agency is a coherent concept depends on the theory of agency that we choose to adopt. We argue that the enactive theory of agency developed by Barandiaran, Di Paolo and Rohde (2009) provides a principled way of grounding agency in biological organisms. However the importance of biological embodiment for the enactive approach might lead one to be skeptical as to whether artificial systems or collectives of individuals could instantiate genuine agency. To explore this issue we contrast the concept (...)
  • Dealing With Ethical Conflicts In Autonomous Agents And Multi-Agent Systems.Aline Belloni, Alain Berger, Olivier Boissier, Grégory Bonnet, Gauvain Bourgne, Pierre Antoine Chardel, Jean-Pierre Cotton, Nicolas Evreux, Jean-Gabriel Ganascia, Philippe Jaillon, Bruno Mermet, Gauthier Picard, Bernard Reber, Gaële Simon, Thibault De Swarte, Catherine Tessier, François Vexler, Robert Voyer & Antoine Zimmermann - unknown
  • Towards A Framework To Deal With Ethical Conflicts In Autonomous Agents And Multi-Agent Systems.Aline Belloni, Alain Berger, Vincent Besson, Olivier Boissier, Grégory Bonnet, Gauvain Bourgne, Pierre Antoine Chardel, Jean-Pierre Cotton, Nicolas Evreux, Jean-Gabriel Ganascia, Philippe Jaillon, Bruno Mermet, Gauthier Picard, Bernard Reber, Gaële Simon, Thibault De Swarte, Catherine Tessier, François Vexler, Robert Voyer & Antoine Zimmermann - unknown
  • The ethics of information warfare.Luciano Floridi & Mariarosaria Taddeo (eds.) - 2014 - Springer International Publishing.
    This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare. -/- The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to (...)
  • Disengagement with ethics in robotics as a tacit form of dehumanisation.Karolina Zawieska - 2020 - AI and Society 35 (4):869-883.
    Over the past two decades, ethical challenges related to robotics technologies have gained increasing interest among different research and non-academic communities, in particular through the field of roboethics. While the reasons to address roboethics are clear, why not to engage with ethics needs to be better understood. This paper focuses on a limited or lacking engagement with ethics that takes place within some parts of the robotics community and its implications for the conceptualisation of the human being. The underlying assumption (...)
  • AWS compliance with the ethical principle of proportionality: three possible solutions.Maciek Zając - 2023 - Ethics and Information Technology 25 (1):1-13.
    The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of “excessiveness”. (...)
  • Autonomous reboot: Aristotle, autonomy and the ends of machine ethics.Jeffrey White - 2022 - AI and Society 37 (2):647-659.
    Tonkens has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. (...)
  • Implications and consequences of robots with biological brains.Kevin Warwick - 2010 - Ethics and Information Technology 12 (3):223-234.
    In this paper a look is taken at the relatively new area of culturing neural tissue and embodying it in a mobile robot platform—essentially giving a robot a biological brain. Present technology and practice is discussed. New trends and the potential effects of and in this area are also indicated. This has a potential major impact with regard to society and ethical issues and hence some initial observations are made. Some initial issues are also considered with regard to the potential (...)
  • Framing robot arms control.Wendell Wallach & Colin Allen - 2013 - Ethics and Information Technology 15 (2):125-135.
    The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how (...)
  • Advocating an ethical memory model for artificial companions from a human-centred perspective.Patricia A. Vargas, Ylva Fernaeus, Mei Yii Lim, Sibylle Enz, Wan Chin Ho, Mattias Jacobsson & Ruth Aylett - 2011 - AI and Society 26 (4):329-337.
    This paper considers the ethical implications of applying three major ethical theories to the memory structure of an artificial companion that might have different embodiments such as a physical robot or a graphical character on a hand-held device. We start by proposing an ethical memory model and then make use of an action-centric framework to evaluate its ethical implications. The case that we discuss is that of digital artefacts that autonomously record and store user data, where this data are used (...)
  • Drones in humanitarian contexts, robot ethics, and the human–robot interaction.Aimee van Wynsberghe & Tina Comes - 2020 - Ethics and Information Technology 22 (1):43-53.
    There are two dominant trends in the humanitarian care of 2019: the ‘technologizing of care’ and the centrality of the humanitarian principles. The concern, however, is that these two trends may conflict with one another. Faced with the growing use of drones in the humanitarian space there is need for ethical reflection to understand if this technology undermines humanitarian care. In the humanitarian space, few agree over the value of drone deployment; one school of thought believes drones can provide a (...)
  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character.Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  • Artificial Consciousness and Artificial Ethics: Between Realism and Social Relationism.Steve Torrance - 2014 - Philosophy and Technology 27 (1):9-29.
    I compare a ‘realist’ with a ‘social–relational’ perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being—particularly in relation to moral patiency attribution—is closely bound up with that being’s ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social relationist denies the existence of any such objective properties in (...)
  • Should autonomous robots be pacifists?Ryan Tonkens - 2013 - Ethics and Information Technology 15 (2):109-123.
    Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory?; and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether (...)
  • Out of character: on the creation of virtuous machines. [REVIEW]Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
    The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken (...)
  • Robowarfare: Can robots be more ethical than humans on the battlefield? [REVIEW]John P. Sullins - 2010 - Ethics and Information Technology 12 (3):263-275.
    Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of their combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This (...)
  • When stigmatization does not work: over-securitization in efforts of the Campaign to Stop Killer Robots.Anzhelika Solovyeva & Nik Hynek - 2023 - AI and Society 38 (6):2547-2569.
    This article reflects on securitization efforts with respect to ‘killer robots’, known more impartially as autonomous weapons systems (AWS). Our contribution focuses, theoretically and empirically, on the Campaign to Stop Killer Robots, a transnational advocacy network vigorously pushing for a pre-emptive ban on AWS. Marking exactly a decade of its activity, there is still no international regime formally banning, or even purposefully regulating, AWS. Our objective is to understand why the Campaign has not been able to advance its disarmament agenda (...)
  • Just war and robots’ killings.Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-22.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
  • Should we welcome robot teachers?Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
    Current uses of robots in classrooms are reviewed and used to characterise four scenarios: Robot as Classroom Teacher; Robot as Companion and Peer; Robot as Care-eliciting Companion; and Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as (...)
  • Can we program or train robots to be good?Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • Autonomous weapons systems, killer robots and human dignity.Amanda Sharkey - 2019 - Ethics and Information Technology 21 (2):75-87.
    One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro, 2012; Docherty, 2014; Heyns, 2017; Ulgen, 2016). However there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher, 2016; Pop, 2018; Saxton, 2016). This paper critically examines the relationship between human dignity and autonomous weapons systems. Three main types of objection to AWS are identified; (i) arguments based on technology and the (...)
  • Günther Anders in Silicon Valley: Artificial intelligence and moral atrophy.Elke Schwarz - 2019 - Thesis Eleven 153 (1):94-112.
    Artificial Intelligence as a buzzword and a technological development is presently cast as the ultimate ‘game changer’ for economy and society; a technology of which we cannot be the master, but which nonetheless will have a pervasive influence on human life. The fast pace with which the multi-billion dollar AI industry advances toward the creation of human-level intelligence is accompanied by an increasingly exaggerated chorus of the ‘incredible miracle’, or the ‘incredible horror’, intelligent machines will constitute for humanity, as the (...)
  • Autonomous Weapons and Distributed Responsibility.Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons.Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Smart soldiers: towards a more ethical warfare.Femi Richard Omotoyinbo - 2023 - AI and Society 38 (4):1485-1491.
    It is a truism that, due to human weaknesses, human soldiers have yet to achieve sufficiently ethical warfare. It is arguable that the likelihood of human soldiers breaching the Principle of Non-Combatant Immunity, for example, is higher in contrast to smart soldiers, who are emotionally inept. Hence, this paper examines the possibility that the integration of ethics into smart soldiers will help address moral challenges in modern warfare. The approach is to develop and employ smart soldiers that are enhanced with ethical (...)
  • Responsibility Practices and Unmanned Military Technologies.Merel Noorman - 2014 - Science and Engineering Ethics 20 (3):809-826.
    The prospect of increasingly autonomous military robots has raised concerns about the obfuscation of human responsibility. This paper argues that whether or not and to what extent human actors are and will be considered to be responsible for the behavior of robotic systems is and will be the outcome of ongoing negotiations between the various human actors involved. These negotiations are about what technologies should do and mean, but they are also about how responsibility should be interpreted and how it (...)
  • “An Eye Turned into a Weapon”: a Philosophical Investigation of Remote Controlled, Automated, and Autonomous Drone Warfare.Oliver Müller - 2020 - Philosophy and Technology 34 (4):875-896.
    Military drones combine surveillance technology with missile equipment in a far-reaching way. In this article, I argue that military drones could and should be an object for philosophical investigation, referring in particular to Chamayou’s theory of the drone; Chamayou also coined the term “an eye turned into a weapon.” Focusing on issues of human self-understanding, agency, and alterity, I examine the intricate human-technology relations in the context of designing and deploying military drones. For that purpose, I am drawing on the (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots.Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)