Results for 'Lethal Autonomous Weapons'

893 found
  1. Lethal Autonomous Weapons: Designing War Machines with Values.Steven Umbrello - 2019 - Delphi: Interdisciplinary Review of Emerging Technologies 1 (2):30-34.
    Lethal Autonomous Weapons (LAWs) have become the subject of continuous debate at both national and international levels. Arguments have been proposed both for the development and use of LAWs and for their prohibition from combat landscapes. Regardless, the development of LAWs continues in numerous nation-states. This paper builds upon previous philosophical arguments for the development and use of LAWs and proposes a design framework that can be used to ethically direct their development. The conclusion is that (...)
    2 citations
  2. The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons.Steven Umbrello, Phil Torres & Angelo F. De Bellis - 2020 - AI and Society 35 (1):273-282.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a (...)
    6 citations
  3. The Soldier’s Share: Considering Narrow Responsibility for Lethal Autonomous Weapons.Kevin Schieman - 2023 - Journal of Military Ethics (3):228-245.
    Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that they would necessarily violate the requirements of jus in bello, and should be prohibited by international law. I argue this view is mistaken and that our moral understanding of war is sufficient to determine (...)
    1 citation
  4. The Soldier's Share: Considering Narrow Proportionality for Lethal Autonomous Weapons.Kevin Schieman - 2023 - Journal of Military Ethics.
    Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that they would necessarily violate the requirements of jus in bello, and should be prohibited by international law. I argue this view is mistaken and that our moral understanding of war is sufficient to determine (...)
  5. No Right To Mercy - Making Sense of Arguments From Dignity in the Lethal Autonomous Weapons Debate.Maciej Zając - 2020 - Etyka 59 (1):134-55.
    Arguments from human dignity feature prominently in the Lethal Autonomous Weapons moral feasibility debate, even though there exists considerable controversy over their role and soundness, and the notion of dignity remains under-defined. Drawing on the work of Dieter Birnbacher, I fix the sub-discourse as referring to the essential value of human persons in general, and to postulated moral rights of combatants not covered within the existing paradigm of International Humanitarian Law in particular. I then review and (...)
  6. Towards broadening the perspective on lethal autonomous weapon systems ethics and regulations.Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho - 2020 - In Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho (eds.), Rio Seminar on Autonomous Weapons Systems. Brasília: Alexandre de Gusmão Foundation. pp. 133-158.
    Our reflections on LAWS issues are the result of the work of our research group on AI and ethics at the Informatics Center in partnership with the Information Science Department, both from the Federal University of Pernambuco, Brazil. In particular, our propositions and provocations are tied to Bianca Ximenes’s ongoing doctoral thesis, advised by Prof. Geber Ramalho, from the area of computer science, and co-advised by Prof. Diego Salcedo, from the humanities. Our research group is interested in answering two tricky (...)
  7. Autonomous weapons systems and the moral equality of combatants.Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided (...)
    5 citations
  8. Could slaughterbots wipe out humanity? Assessment of the global catastrophic risk posed by autonomous weapons.Alexey Turchin - manuscript
    Recently criticisms against autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative (...)
    1 citation
  9. Coupling levels of abstraction in understanding meaningful human control of autonomous weapons: a two-tiered approach.Steven Umbrello - 2021 - Ethics and Information Technology 23 (3):455-464.
    The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focus on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of (...)
    3 citations
  10. Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace.Duncan MacIntosh - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. New York: Oxford University Press. pp. 9-23.
    Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is (...)
    1 citation
  11. "Jewish Law, Techno-Ethics, and Autonomous Weapon Systems: Ethical-Halakhic Perspectives".Nadav S. Berman - 2020 - Jewish Law Association Studies 29:91-124.
    Techno-ethics is the area in the philosophy of technology which deals with emerging robotic and digital AI technologies. In the last decade, a new techno-ethical challenge has emerged: Autonomous Weapon Systems (AWS), defensive and offensive (the article deals only with the latter). Such AI-operated lethal machines of various forms (aerial, marine, continental) raise substantial ethical concerns. Interestingly, the topic of AWS has hardly been treated in Jewish law and its research. This article thus proposes an introductory ethical-halakhic perspective (...)
  12. Doctor of Philosophy Thesis in Military Informatics (OpenPhD): Lethal Autonomy of Weapons is Designed and/or Recessive.Nyagudi Nyagudi Musandu - 2016-12-09 - Dissertation, OpenPhD (#OpenPhD), e.g. Wikiversity https://en.wikiversity.org/wiki/Doctor_of_Philosophy, etc.
    My original contribution to knowledge is: Any weapon that exhibits intended and/or unintended lethal autonomy in targeting and interdiction – does so by way of design and/or recessive flaw(s) in its systems of control – any such weapon is capable of war-fighting and other battle-space interaction in a manner that its Human Commander does not anticipate. Even with the complexity of Lethal Autonomy issues there is nothing particular to gain from being a low-tech Military. Lethal (...) weapons are therefore independently capable of exhibiting positive or negative recessive norms of targeting in their perceptions of Discrimination between Civilian and Military Objects, Proportionality of Methods and Outcomes, Feasible Precaution before interdiction and their underlying Concepts of Humanity. Additionally, Lethal Autonomy in Human-interacting Autonomous Robots is ubiquitous [designed and/or recessive]. This marks the completion of an Open PhD (#openphd) project done in sui generis form.
  13. Punishing Robots – Way Out of Sparrow’s Responsibility Attribution Problem.Maciek Zając - 2020 - Journal of Military Ethics 19 (4):285-291.
    The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon...
    4 citations
  14. Autonomous killer robots are probably good news.Vincent C. Müller - 2016 - In Ezio Di Nucci & Filippo Santoni de Sio (eds.), Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons. Routledge. pp. 67-81.
    Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take (...)
    17 citations
  15. Just War and Robots’ Killings.Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-22.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We (...)
    30 citations
  16. The Problem with Killer Robots.Nathan Gabriel Wood - 2020 - Journal of Military Ethics 19 (3):220-240.
    Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolving. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor; robots are capable of better identifying combatants and civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur more risks to protect innocents or gather more information before (...)
    5 citations
  17. Killer robots: Regulate, don’t ban.Vincent C. Müller & Thomas W. Simpson - 2014 - In Vincent C. Müller & Thomas W. Simpson (eds.), Killer robots: Regulate, don’t ban. Blavatnik School of Government. pp. 1-4.
    Lethal Autonomous Weapon Systems are here. Technological development will see them become widespread in the near future. This is in a matter of years rather than decades. When the UN Convention on Certain Conventional Weapons meets on 10-14th November 2014, well-considered guidance for a decision on the general policy direction for LAWS is clearly needed. While there is widespread opposition to LAWS—or ‘killer robots’, as they are popularly called—and a growing campaign advocates banning them outright, we argue (...)
  18. Designed for Death: Controlling Killer Robots.Steven Umbrello - 2022 - Budapest: Trivent Publishing.
    Autonomous weapons systems, often referred to as ‘killer robots’, have been a hallmark of popular imagination for decades. However, with the inexorable advance of artificial intelligence systems (AI) and robotics, killer robots are quickly becoming a reality. These lethal technologies can learn, adapt, and potentially make life and death decisions on the battlefield with little-to-no human involvement. This naturally leads to not only legal but also ethical concerns as to whether we can meaningfully control such machines, and if (...)
  19. Kantian Ethics in the Age of Artificial Intelligence and Robotics.Ozlem Ulgen - 2017 - Questions of International Law 1 (43):59-83.
    Artificial intelligence and robotics are pervasive in daily life and set to expand to new levels, potentially replacing human decision-making and action. Self-driving cars, home and healthcare robots, and autonomous weapons are some examples. A distinction appears to be emerging between potentially benevolent civilian uses of the technology (eg unmanned aerial vehicles delivering medicines), and potentially malevolent military uses (eg lethal autonomous weapons killing human combatants). Machine-mediated human interaction challenges the philosophical basis of human (...)
    5 citations
  20. Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law.Duncan MacIntosh - 2016 - Temple International and Comparative Law Journal 30 (1):99-117.
    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as (...)
    2 citations
  21. Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status.Steven Umbrello & Nathan Gabriel Wood - 2021 - Information 12 (5):216.
    Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving evermore attention, both in public discourse as well as by scholars and policymakers. Much of this interest is connected with emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable (...)
    3 citations
  22. Autonomous Weapon Systems, Asymmetrical Warfare, and Myth.Michal Klincewicz - 2018 - Civitas. Studia Z Filozofii Polityki 23:179-195.
    Predictions about autonomous weapon systems are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately offers an argument that nuclear (...)
  23. Autonomous Weapons Systems, the Frame Problem and Computer Security.Michał Klincewicz - 2015 - Journal of Military Ethics 14 (2):162-176.
    Unlike human soldiers, autonomous weapons systems are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security risks and will make (...)
    8 citations
  24. A Moral Bind? — Autonomous Weapons, Moral Responsibility, and Institutional Reality.Bartek Chomanski - 2023 - Philosophy and Technology 36.
    In “Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit” (2022), Mariarosaria Taddeo and Alexander Blanchard answer one of the most vexing issues in current ethics of technology: how to close the so-called “responsibility gap”? Their solution is to require that autonomous weapons systems (AWSs) may only be used if there is some human being who accepts the ex ante responsibility for those actions of the AWS that could not have been predicted or (...)
  25. Autonomous Weapon Systems in Just War Theory perspective. Maciej - 2022 - Dissertation.
    Please contact me at [email protected] if you are interested in reading a particular chapter or being sent the entire manuscript for private use. -/- The thesis offers a comprehensive argument in favor of a regulationist approach to autonomous weapon systems (AWS). AWS, defined as all military robots capable of selecting or engaging targets without direct human involvement, are an emerging and potentially deeply transformative military technology subject to very substantial ethical controversy. AWS have both their enthusiasts and their detractors, (...)
  26. The Unfounded Bias Against Autonomous Weapons Systems.Áron Dombrovszki - 2021 - Információs Társadalom 21 (2):13–28.
    Autonomous Weapons Systems (AWS) have not gained a good reputation in the past. This attitude is odd if we look at the discussion of other, usually highly anticipated, AI technologies, such as autonomous vehicles (AVs): even though these machines evoke very similar ethical issues, philosophers' attitudes towards them are constructive. In this article, I try to prove that there is an unjust bias against AWS because almost every argument against them is effective against AVs too. I start with the definition (...)
  27. Arguments for Banning Autonomous Weapon Systems: A Critique.Hunter B. Cantrell - 2019 - Dissertation, Georgia State University
    Autonomous Weapon Systems (AWS) are the next logical advancement for military technology. There is a significant concern though that by allowing such systems on the battlefield, we are collectively abdicating our moral responsibility. In this thesis, I will examine two arguments that advocate for a total ban on the use of AWS. I call these arguments the “Responsibility” and the “Agency” arguments. After presenting these arguments, I provide my own objections and demonstrate why these arguments fail to convince. I (...)
  28. Robot warfare: the (im)permissibility of autonomous weapons systems.Jack Madock - 2024 - AI and Ethics 1.
    This paper argues against prominent views of the impermissibility of autonomous weapons systems (AWS). It does so by assuming each theory is true and arguing towards contradiction. To arrive at a contradiction two assumptions are necessary. First, the theory of impermissibility in question is assumed. Second, a thought experiment called the ideal warfare scenario is assumed. The paper aims to demonstrate that in theory AWS could be deployed such that they bring about the best of possible warfare. However, (...)
  29. Towards a Value Sensitive Design Framework for Attaining Meaningful Human Control over Autonomous Weapons Systems.Steven Umbrello - 2021 - Dissertation, Consortium Fino
    The international debate on the ethics and legality of autonomous weapon systems (AWS) as well as the call for a ban are primarily focused on the nebulous concept of fully autonomous AWS. More specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced both from military planning and decision-making operations as well as the design requirements that govern AWS engineering and subsequently (...)
  30. A Risk-Based Regulatory Approach to Autonomous Weapon Systems.Alexander Blanchard, Claudio Novelli, Luciano Floridi & Mariarosaria Taddeo - manuscript
    International regulation of autonomous weapon systems (AWS) is increasingly conceived as an exercise in risk management. This requires a shared approach for assessing the risks of AWS. This paper presents a structured approach to risk assessment and regulation for AWS, adapting a qualitative framework inspired by the Intergovernmental Panel on Climate Change (IPCC). It examines the interactions among key risk factors—determinants, drivers, and types—to evaluate the risk magnitude of AWS and establish risk tolerance thresholds through a risk matrix informed (...)
  31. Twenty seconds to comply: Autonomous weapon systems and the recognition of surrender.Robert Sparrow - 2015 - International Law Studies 91:699-728.
    Would it be ethical to deploy autonomous weapon systems (AWS) if they were unable to reliably recognize when enemy forces had surrendered? I suggest that an inability to reliably recognize surrender would not prohibit the ethical deployment of AWS where there was a limited window of opportunity for targets to surrender between the launch of the AWS and its impact. However, the operations of AWS with a high degree of autonomy and/or long periods of time between release and impact (...)
    5 citations
  32. Burden of Proof in the Autonomous Weapons Debate.Maciek Zając - 2024 - Ethics and Armed Forces 2024 (1):34-42.
    The debate on the ethical permissibility of autonomous weapon systems (AWS) is deadlocked. It could therefore benefit from a differentiated assignment of the burden of proof. This is because the discussion is not purely philosophical in nature, but has a legal and security policy component and aims to avoid the most harmful outcomes of an otherwise unchecked development. Opponents of a universal AWS ban must clearly demonstrate that AWS comply with the Law of Armed Conflict (LOAC). This requires extensive (...)
  33. The Automation of Authority: Discrepancies with Jus Ad Bellum Principles.Donovan Phillips - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. New York: Oxford University Press. pp. 159-172.
    This chapter considers how the adoption of autonomous weapons systems (AWS) may affect jus ad bellum principles of warfare. In particular, it focuses on the use of AWS in non-international armed conflicts (NIAC). Given the proliferation of NIAC, the development and use of AWS will most likely be attuned to this specific theater of war. As warfare waged by modernized liberal democracies (those most likely to develop and employ AWS at present) increasingly moves toward a model of individualized (...)
  34. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of.Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the (...)
  35. Preventing another Mosul: Unmanned Weapon Platforms as the Solution to the Tragedy of a Hostage Siege. Maciej - 2022 - In Dragan Stanar and Kristina Tonn (eds.), The Ethics of Urban Warfare: City and War. pp. 153-171.
    The 2016-17 Iraqi offensive that recaptured the city of Mosul from the Islamic State has demonstrated the inability of contemporary armed forces to retake urban areas from a determined and ruthless enemy without either suffering debilitating casualties or causing thousands of civilian deaths and virtually destroying the city itself. The enemy’s willingness to refuse civilian evacuation via a humanitarian corridor and effectively take the inhabitants hostage is all it takes to impose this tragic dilemma on an attacking force. The civilian (...)
  36. An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications.Markus Christen, Thomas Burri, Joseph O. Chapa, Raphael Salvi, Filippo Santoni de Sio & John P. Sullins - 2017 - University of Zurich Digital Society Initiative White Paper Series, No. 1.
    We propose a multi-step evaluation schema designed to help procurement agencies and others to examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
    1 citation
  37. Reasons to Punish Autonomous Robots.Zac Cogley - 2023 - The Gradient 14.
    I here consider the reasonableness of punishing future autonomous military robots. I argue that it is an engineering desideratum that these devices be responsive to moral considerations as well as human criticism and blame. Additionally, I argue that someday it will be possible to build such machines. I use these claims to respond to the no subject of punishment objection to deploying autonomous military robots, the worry being that an “accountability gap” could result if the robot committed a (...)
    1 citation
  38. Lethal Military Robots: Who is Responsible When Things Go Wrong?Peter Olsthoorn - 2018 - In Rocci Luppicini (ed.), The Changing Scope of Technoethics in Contemporary Society. Hershey, PA: IGI Global. pp. 106-123.
    Although most unmanned systems that militaries use today are still unarmed and predominantly used for surveillance, it is especially the proliferation of armed military robots that raises some serious ethical questions. One of the most pressing concerns the moral responsibility in case a military robot uses violence in a way that would normally qualify as a war crime. In this chapter, the authors critically assess the chain of responsibility with respect to the deployment of both semi-autonomous and (learning) (...) lethal military robots. They start by looking at military commanders because they are the ones with whom responsibility normally lies. The authors argue that this is typically still the case when lethal robots kill wrongly – even if these robots act autonomously. Nonetheless, they next look into the possible moral responsibility of the actors at the beginning and the end of the causal chain: those who design and manufacture armed military robots, and those who, far from the battlefield, remotely control them.
    1 citation
  39. Predators or Ploughshares? Arms Control of Robotic Weapons.Robert Sparrow - 2009 - IEEE Technology and Society 28 (1):25-29.
    This paper makes the case for arms control regimes to govern the development and deployment of autonomous weapon systems and long range uninhabited aerial vehicles.
    14 citations
  40. Beyond Deadlock: Low Hanging Fruit and Strict yet Available Options in AWS Regulation.Maciej Zając - 2022 - Journal of Ethics and Emerging Technologies 2 (32):1-14.
    Efforts to ban Autonomous Weapon Systems were both unsuccessful and controversial. Simultaneously the need to address the detrimental aspects of AWS development and proliferation continues to grow in scope and urgency. The article presents several regulatory solutions capable of addressing the issue while simultaneously respecting the requirements of military necessity and so attracting a broad consensus. Two much stricter solutions – regional AWS bans and adoption of a no first use policy – are also presented as fallback strategies in (...)
  41. AWS compliance with the ethical principle of proportionality: three possible solutions.Maciek Zając - 2023 - Ethics and Information Technology 25 (1):1-13.
    The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of (...)
    1 citation
  42. How virtue signalling makes us better: moral preferences with respect to autonomous vehicle type choices.Robin Kopecky, Michaela Jirout Košová, Daniel D. Novotný, Jaroslav Flegr & David Černý - 2023 - AI and Society 38 (2):937-946.
    One of the moral questions concerning autonomous vehicles (henceforth AVs) is the choice between types that differ in their built-in algorithms for dealing with rare situations of unavoidable lethal collision. It does not appear to be possible to avoid questions about how these algorithms should be designed. We present the results of our study of moral preferences (N = 2769) with respect to three types of AVs: (1) selfish, which protects the lives of passenger(s) over any number of (...)
  43. Reasons for Meaningful Human Control.Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
    “Meaningful human control” is a term invented in the political and legal debate on autonomous weapon systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to (...)
    5 citations
  44. A Case for 'Killer Robots': Why in the Long Run Martial AI May Be Good for Peace.Ognjen Arandjelović - 2023 - Journal of Ethics, Entrepreneurship and Technology 3 (1).
    Purpose: The remarkable increase of sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, the potential of so-called 'killer robots' ceasing to be a subject of fiction. -/- Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for the ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the (...)
  45. Hunting For Humans: On Slavery as the Basis of the Emergence of the US as the World’s First Super Industrial State or Technocracy and its Deployment of Cutting-Edge Computing/Artificial Intelligence Technologies, Predictive Analytics, and Drones towards the Repression of Dissent.Miron Clay-Gilmore - manuscript
    This essay argues that Huey Newton’s philosophical explanation of US empire fills an epistemological gap in our thinking that provides us with a basis for understanding the emergence and operational application of predictive policing, Big Data, cutting-edge surveillance programs, and semi-autonomous weapons by US military and policing apparati to maintain control over racialized populations historically and in the (still ongoing) Global War on Terror today – a phenomenon that Black Studies scholars and Black philosophers alike have yet to (...)
  46. How AI Systems Can Be Blameworthy.Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the (...)
  47. Artificial Intelligence, Robots, and Philosophy.Masahiro Morioka, Shin-Ichiro Inaba, Makoto Kureha, István Zoltán Zárdai, Minao Kukita, Shimpei Okamoto, Yuko Murakami & Rossa Ó Muireartaigh - 2023 - Journal of Philosophy of Life.
    This book is a collection of all the papers published in the special issue “Artificial Intelligence, Robots, and Philosophy,” Journal of Philosophy of Life, Vol.13, No.1, 2023, pp.1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited and (...)
  48. Should we campaign against sex robots?John Danaher, Brian D. Earp & Anders Sandberg - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. MIT Press.
    In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue (...)
    6 citations
  49. Defensive Killing By Police: Analyzing Uncertain Threat Scenarios.Jennifer M. Page - 2023 - Journal of Ethics and Social Philosophy 24 (3):315-351.
    In the United States, police use of force experts often maintain that controversial police shootings where an unarmed person’s hand gesture was interpreted as their “going for a gun” are justifiable. If an officer waits to confirm that a weapon is indeed being pulled from a jacket pocket or waistband, it may be too late to defend against a lethal attack. This article examines police policy norms for self-defense against “uncertain threats” in three contexts: (1) known in-progress violent crimes, (...)
    1 citation
  50. The World Crisis - And What To Do About It: A Revolution for Thought and Action.Nicholas Maxwell - 2021 - New Jersey: World Scientific.
    Two great problems of learning confront humanity: learning about the universe, and about ourselves and other living things as a part of the universe; and learning how to create a good, civilized, enlightened, wise world. We have solved the first great problem of learning – we did that when we created modern science and technology in the 17th century. But we have not yet solved the second one. That combination of solving the first problem, failing to solve the second one, (...)
Showing results 1–50 of 893