  • The Implications of Drones on the Just War Tradition. Daniel Brunstetter & Megan Braun - 2011 - Ethics and International Affairs 25 (3):337-358.
    The aim of this article is to explore how the brief history of drone warfare thus far affects and potentially alters the parameters of ad bellum and in bello just war principles.
  • The ethics of information warfare. Luciano Floridi & Mariarosaria Taddeo (eds.) - 2014 - Springer International Publishing.
    This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare. The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to (...)
  • Autonomous Weapon Systems: A Clarification. Nathan Gabriel Wood - 2023 - Journal of Military Ethics 22 (1):18-32.
    Due to advances in military technology, there has been an outpouring of research on what are known as autonomous weapon systems (AWS). However, it is common in this literature for arguments to be made without first making clear exactly what definitions one is employing, with the detrimental effect that authors may speak past one another or even miss the targets of their arguments. In this article I examine the U.S. Department of Defense and International Committee of the Red Cross definitions (...)
  • The case against robotic warfare: A response to Arkin. Ryan Tonkens - 2012 - Journal of Military Ethics 11 (2):149-168.
    Semi-autonomous robotic weapons are already carving out a role for themselves in modern warfare. Recently, Ronald Arkin has argued that autonomous lethal robotic systems could be more ethical than humans on the battlefield, and that this marks a significant reason in favour of their development and use. Here I offer a critical response to the position advanced by Arkin. Although I am sympathetic to the spirit of the motivation behind Arkin's project and agree that if we decide to develop (...)
  • Should autonomous robots be pacifists? Ryan Tonkens - 2013 - Ethics and Information Technology 15 (2):109-123.
    Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory?; and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether (...)
  • Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This (...)
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • Contemporary Technologies and the Morality of Warfare: The War of the Machines. Brian Smith - 2022 - Journal of Military Ethics 21 (1):88-92.
    The belief that automated technologies will have a salutary effect on war goes back to the late nineteenth century. In 1898, at Madison Square Garden, Nikola Tesla famously showcased the first radi...
  • Just war and robots’ killings. Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-22.
    May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
  • Günther Anders in Silicon Valley: Artificial intelligence and moral atrophy. Elke Schwarz - 2019 - Thesis Eleven 153 (1):94-112.
    Artificial Intelligence as a buzzword and a technological development is presently cast as the ultimate ‘game changer’ for economy and society; a technology of which we cannot be the master, but which nonetheless will have a pervasive influence on human life. The fast pace with which the multi-billion dollar AI industry advances toward the creation of human-level intelligence is accompanied by an increasingly exaggerated chorus of the ‘incredible miracle’, or the ‘incredible horror’, intelligent machines will constitute for humanity, as the (...)
  • Societal and ethical issues of digitization. Lambèr Royakkers, Jelte Timmer, Linda Kool & Rinie van Est - 2018 - Ethics and Information Technology 20 (2):127-142.
    In this paper we discuss the social and ethical issues that arise as a result of digitization based on six dominant technologies: Internet of Things, robotics, biometrics, persuasive technology, virtual & augmented reality, and digital platforms. We highlight the many developments in the digitizing society that appear to be at odds with six recurring themes emerging from our analysis of the scientific literature on the dominant technologies: privacy, autonomy, security, human dignity, justice, and balance of power. This study shows that (...)
  • The Moral Case for the Development and Use of Autonomous Weapon Systems. Erich Riesen - 2022 - Journal of Military Ethics 21 (2):132-150.
    Autonomous Weapon Systems (AWS) are artificial intelligence systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an (...)
  • The ethics of crashes with self‐driving cars: A roadmap, II. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12506.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self‐driving cars crash and people are injured or killed. We also need to examine what new ethical obligations might be created for car users by the safety potential of self‐driving cars. The article first considers what lessons might be learned from the growing legal literature on responsibility for crashes with (...)
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci. Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
  • Automated cars meet human drivers: responsible human-robot coordination and the ethics of mixed traffic. Sven Nyholm & Jilles Smids - 2020 - Ethics and Information Technology 22 (4):335-344.
    In this paper, we discuss the ethics of automated driving. More specifically, we discuss responsible human-robot coordination within mixed traffic: i.e. traffic involving both automated cars and conventional human-driven cars. We do three main things. First, we explain key differences in robotic and human agency and expectation-forming mechanisms that are likely to give rise to compatibility-problems in mixed traffic, which may lead to crashes and accidents. Second, we identify three possible solution-strategies for achieving better human-robot coordination within mixed traffic. Third, (...)
  • Decision Making in Killer Robots Is Not Bias Free. Teresa Limata - 2023 - Journal of Military Ethics 22 (2):118-128.
    Autonomous weapons are systems that, once activated, can identify, select and engage targets by themselves. Scharre (2018, Army of None: Autonomous Weapons and the Future of War, New York: Norton) has given a definition of autonomy based on three dimensions: the automatized tasks, the relationship with the human user and the sophistication of the machine’s decision-making process. Based on this definition of autonomy, this article provides an overview of systematic biases that may occur in each of these three dimensions. Before (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Autonomous Weapons Systems, the Frame Problem and Computer Security. Michał Klincewicz - 2015 - Journal of Military Ethics 14 (2):162-176.
    Unlike human soldiers, autonomous weapons systems are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security risks and will make AWS critically (...)
  • Role of emotions in responsible military AI. José Kerstholt, Mark Neerincx, Karel van den Bosch, Jason S. Metcalfe & Jurriaan van Diggelen - 2023 - Ethics and Information Technology 25 (1):1-4.
  • Technology with No Human Responsibility? Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
  • The morality of autonomous robots. Aaron M. Johnson & Sidney Axinn - 2013 - Journal of Military Ethics 12 (2):129-141.
    While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be ‘no’ and offer several reasons for banning (...)
  • Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects. Nik Hynek & Anzhelika Solovyeva - 2021 - AI and Society 36 (1):79-99.
    The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with the approaching dramatic change in the nature of warfare. What becomes particularly important and evermore intensely contested is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While there has not been a global regime banning this technology, regulatory attempts at establishing a ban (...)
  • The Cognitive Nonconscious: Enlarging the Mind of the Humanities. N. Katherine Hayles - 2016 - Critical Inquiry 42 (4):783-808.
  • Cognitive Assemblages: Technical Agency and Human Interactions. N. Katherine Hayles - 2016 - Critical Inquiry 43 (1):32-55.
  • The Problem with Killer Robots. Nathan Gabriel Wood - 2020 - Journal of Military Ethics 19 (3):220-240.
    Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolving. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor: robots are capable of better identifying combatants and civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur more risks to protect innocents or gather more information before using deadly force; (...)
  • Drone Killings in Principle and in Practice. Morten Dige - 2017 - Ethical Theory and Moral Practice 20 (4):873-883.
    It is a widely accepted claim that whether a given technology is being justly used in the real world is a separate question from moral issues intrinsic to the technology. We should not blame the technology itself for immoral ways it happens to be used. There is obviously some truth to that. But I want to argue that what we see in real-world cases of drone killings is not merely an accidental or contingent use of drone technology. The real (...)
  • Artificial intelligence and humanitarian obligations. David Danks & Daniel Trusilo - 2023 - Ethics and Information Technology 25 (1):1-5.
    Artificial Intelligence (AI) offers numerous opportunities to improve military Intelligence, Surveillance, and Reconnaissance (ISR) operations. Moreover, modern militaries recognize the strategic value of reducing civilian harm. Grounded in these two assertions, we focus on the transformative potential that AI ISR systems have for improving the respect for and protection of humanitarian relief operations. Specifically, we propose that establishing an interface for humanitarian organizations to military AI ISR systems can improve the current state of ad-hoc humanitarian notification systems, which are notoriously unreliable (...)
  • Machine agency and representation. Beba Cibralic & James Mattingly - 2024 - AI and Society 39 (1):345-352.
    Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that AI systems do not, and so cannot be agents. Properly understood, there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We (...)
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Rethinking the Criterion for Assessing CIA-targeted Killings: Drones, Proportionality and Jus Ad Vim. Megan Braun & Daniel R. Brunstetter - 2013 - Journal of Military Ethics 12 (4):304-324.
  • Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare. Kieran M. Brayford - forthcoming - AI and Society:1-9.
    Despite resistance from various societal actors, the development and deployment of lethal autonomous weaponry to warzones is perhaps likely, considering the perceived operational and ethical advantage such weapons are purported to bring. In this paper, it is argued that the deployment of truly autonomous weaponry presents an ethical danger by calling into question the ability of such weapons to abide by the Laws of War. This is done by noting the resonances between battlefield target identification and the process of ontic-ontological (...)
  • Value Sensitive Design for autonomous weapon systems – a primer. Christine Boshuijzen-van Burken - 2023 - Ethics and Information Technology 25 (1):1-14.
    Value Sensitive Design (VSD) is a design methodology developed by Batya Friedman and Peter Kahn (2003) that brings moral deliberations into an early stage of a design process. It assumes neither that technology itself is value neutral, nor that its value-ladenness lies solely in the usage of the technology. This paper adds to emerging literature on VSD for autonomous weapons systems development and discusses extant literature on values in autonomous systems development in general and in autonomous weapons development in particular. I identify (...)
  • Jus in bello Necessity, The Requirement of Minimal Force, and Autonomous Weapons Systems. Alexander Blanchard & Mariarosaria Taddeo - 2022 - Journal of Military Ethics 21 (3):286-303.
    In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlooking this principle means discounting the obligations that combatants have towards one another in times of war. We argue that the requirement of (...)
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • “The Sort of War They Deserve”? The Ethics of Emerging Air Power and the Debate over Warbots. Benjamin R. Banta - 2018 - Journal of Military Ethics 17 (2):156-171.
    As new military technologies change the character of war by empowering agents in new ways, it can become more difficult for our ethics of war to achieve the right balance between moral principle and necessity. Indeed, there is an ever-growing literature that seeks to apply, defend and/or update the ethics of war in light of what is often claimed to be an unprecedented period of rapid advancement in military robotics, or warbots. To increase confidence that our approach to (...)
  • An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications. Markus Christen, Thomas Burri, Joseph O. Chapa, Raphael Salvi, Filippo Santoni de Sio & John P. Sullins - 2017 - University of Zurich Digital Society Initiative White Paper Series, No. 1.
    We propose a multi-step evaluation schema designed to help procurement agencies and others to examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
  • Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University.
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn't morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)
  • Twenty seconds to comply: Autonomous weapon systems and the recognition of surrender. Robert Sparrow - 2015 - International Law Studies 91:699-728.
    Would it be ethical to deploy autonomous weapon systems (AWS) if they were unable to reliably recognize when enemy forces had surrendered? I suggest that an inability to reliably recognize surrender would not prohibit the ethical deployment of AWS where there was a limited window of opportunity for targets to surrender between the launch of the AWS and its impact. However, the operations of AWS with a high degree of autonomy and/or long periods of time between release and impact are (...)
  • Osaammeko rakentaa moraalisia toimijoita? Antti Kauppinen - 2021 - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta.
    For us to be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some degree, in accordance with them. If we are full moral agents, we also understand why certain acts are wrong, and can therefore flexibly adapt our conduct to different situations. I argue that no AI systems are in sight that could genuinely care about doing right or understand the demands of morality, because these capacities require experiential consciousness and holistic judgment. We therefore cannot offload responsibility for their actions onto machines. Instead, we should aim to build artificial right-doers: systems that do not (...)