References
  • Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit.Mariarosaria Taddeo & Alexander Blanchard - 2022 - Philosophy and Technology 35 (3):1-24.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way and which is accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful (...)
  • Trusting artificial intelligence in cybersecurity is a double-edged sword.Mariarosaria Taddeo, Tom McCutcheon & Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. (...)
  • Trusting Digital Technologies Correctly.Mariarosaria Taddeo - 2017 - Minds and Machines 27 (4):565-568.
  • Three Ethical Challenges of Applications of Artificial Intelligence in Cybersecurity.Mariarosaria Taddeo - 2019 - Minds and Machines 29 (2):187-191.
  • Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust.Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)
  • Ethical Principles for Artificial Intelligence in National Defence.Mariarosaria Taddeo, David McNeish, Alexander Blanchard & Elizabeth Edgar - 2021 - Philosophy and Technology 34 (4):1707-1729.
    Defence agencies across the globe identify artificial intelligence as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control (...)
  • Autonomous weapon systems and jus ad bellum.Alexander Blanchard & Mariarosaria Taddeo - forthcoming - AI and Society:1-7.
    In this article, we focus on the scholarly and policy debate on autonomous weapon systems and particularly on the objections to the use of these weapons which rest on jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs of going to war or by providing a propagandistic value. We argue that whilst these objections offer pressing concerns in their own right, they suffer (...)
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • Killer robots.Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62–77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation.Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang & Luciano Floridi - 2021 - AI and Society 36 (1):59–77.
    In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on (...)
  • Saying 'No!' to Lethal Autonomous Targeting.Noel Sharkey - 2010 - Journal of Military Ethics 9 (4):369-383.
    Plans to automate killing by using robots armed with lethal weapons have been a prominent feature of most US military forces' roadmaps since 2004. The idea is to have a staged move from 'man-in-the-loop' to 'man-on-the-loop' to full autonomy. While this may result in considerable military advantages, the policy raises ethical concerns with regard to potential breaches of International Humanitarian Law, including the Principle of Distinction and the Principle of Proportionality. Current applications of remotely piloted robot planes or drones offer (...)
  • Autonomous weapons systems, killer robots and human dignity.Amanda Sharkey - 2019 - Ethics and Information Technology 21 (2):75-87.
    One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro, 2012; Docherty, 2014; Heyns, 2017; Ulgen, 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher, 2016; Pop, 2018; Saxton, 2016). This paper critically examines the relationship between human dignity and autonomous weapons systems. Three main types of objection to AWS are identified: (i) arguments based on technology and the (...)
  • The Strategic Robot Problem: Lethal Autonomous Weapons in War.Heather M. Roff - 2014 - Journal of Military Ethics 13 (3):211-227.
    The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering more and more attention. Much of the argument revolves around whether such machines would be able to uphold the principle of noncombatant immunity. However, much of the present debate fails to take into consideration the practical realities of contemporary armed conflict, particularly generating military objectives and the adherence to a targeting process. This paper argues that we must look to the targeting process (...)
  • The fourth revolution: how the infosphere is reshaping human reality.Luciano Floridi - 2014 - Oxford University Press UK.
    Who are we, and how do we relate to each other? Luciano Floridi, one of the leading figures in contemporary philosophy, argues that the explosive developments in Information and Communication Technologies are changing the answer to these fundamental human questions. As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an "infosphere". Personas we adopt in social media, for example, feed (...)
  • The morality of autonomous robots.Aaron M. Johnson & Sidney Axinn - 2013 - Journal of Military Ethics 12 (2):129-141.
    While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be 'no' and offer several reasons for banning (...)
  • What is computer ethics?James H. Moor - 1985 - Metaphilosophy 16 (4):266-275.
  • On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • From Jus ad Bellum to Jus ad Vim: Recalibrating Our Understanding of the Moral Use of Force.Daniel Brunstetter & Megan Braun - 2013 - Ethics and International Affairs 27 (1):87-106.
    In the preface of the 2006 edition of Just and Unjust Wars, Michael Walzer makes an important distinction between, on the one hand, “measures short of war,” such as imposing no-fly zones, pinpoint air/missile strikes, and CIA operations, and on the other, “actual warfare,” typified by a ground invasion or a large-scale bombing campaign. Even if the former are, technically speaking, acts of war according to international law, he proffers that “it is common sense to recognize that they are very different (...)
  • Jus in bello Necessity, the Requirement of Minimal Force, and Autonomous Weapons Systems.Alexander Blanchard & Mariarosaria Taddeo - 2022 - Journal of Military Ethics 21 (3):286-303.
    In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlooking this principle means discounting the obligations that combatants have towards one another in times of war. We argue that the requirement of (...)