
Citations of:

Killer robots

Journal of Applied Philosophy 24 (1):62–77 (2007)

  • Robots as “Evil Means”? A Rejoinder to Jenkins and Purves. Robert Sparrow - 2016 - Ethics and International Affairs 30 (3):401-403.
  • Sexual Rights, Disability and Sex Robots. Ezio Di Nucci - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. MIT Press.
    I argue that the right to sexual satisfaction of severely physically and mentally disabled people and elderly people who suffer from neurodegenerative diseases can be fulfilled by deploying sex robots; this would enable us to satisfy the sexual needs of many who cannot provide for their own sexual satisfaction, without at the same time violating anybody’s right to sexual self-determination. I don’t offer a full-blown moral justification of deploying sex robots in such cases, as not all morally relevant concerns can (...)
  • A Typology of Posthumanism: A Framework for Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms. Matthew E. Gladden - 2016 - In Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization. Defragmenter Media. pp. 31-91.
    The term ‘posthumanism’ has been employed to describe a diverse array of phenomena ranging from academic disciplines and artistic movements to political advocacy campaigns and the development of commercial technologies. Such phenomena differ widely in their subject matter, purpose, and methodology, raising the question of whether it is possible to fashion a coherent definition of posthumanism that encompasses all phenomena thus labelled. In this text, we seek to bring greater clarity to this discussion by formulating a novel conceptual framework for (...)
  • Organizational Posthumanism. Matthew E. Gladden - 2016 - In Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization. Defragmenter Media. pp. 93-131.
    Building on existing forms of critical, cultural, biopolitical, and sociopolitical posthumanism, in this text a new framework is developed for understanding and guiding the forces of technologization and posthumanization that are reshaping contemporary organizations. This ‘organizational posthumanism’ is an approach to analyzing, creating, and managing organizations that employs a post-dualistic and post-anthropocentric perspective and which recognizes that emerging technologies will increasingly transform the kinds of members, structures, systems, processes, physical and virtual spaces, and external ecosystems that are available for organizations (...)
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Drones, courage, and military culture. Robert Sparrow - 2015 - In George Lucas Jr. (ed.), Routledge Handbook of Military Ethics. London: Routledge. pp. 380-394.
    In so far as long-range tele-operated weapons, such as the United States’ Predator and Reaper drones, allow their operators to fight wars in what appears to be complete safety, thousands of kilometres removed from those whom they target and kill, it is unclear whether drone operators either require courage or have the opportunity to develop or exercise it. This chapter investigates the implications of the development of tele-operated warfare for the extent to which courage will remain central to the role (...)
  • Twenty seconds to comply: Autonomous weapon systems and the recognition of surrender. Robert Sparrow - 2015 - International Law Studies 91:699-728.
    Would it be ethical to deploy autonomous weapon systems (AWS) if they were unable to reliably recognize when enemy forces had surrendered? I suggest that an inability to reliably recognize surrender would not prohibit the ethical deployment of AWS where there was a limited window of opportunity for targets to surrender between the launch of the AWS and its impact. However, the operations of AWS with a high degree of autonomy and/or long periods of time between release and impact are (...)
  • Just War and Robots’ Killings. Thomas W. Simpson & Vincent C. Müller - 2016 - Philosophical Quarterly 66 (263):302-322.
    May lethal autonomous weapons systems—‘killer robots ’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Why moral philosophers should watch sci-fi movies. Nikil Mukerji - 2014 - In Fiorella Battaglia & Nathalie Weidenfeld (eds.), Roboethics in Film. Pisa, Italy: Pisa University Press. pp. 79-92.
    In this short piece, I explore why we, as moral philosophers, should watch sci-fi movies. Though I do not believe that sci-fi material is necessary for doing good moral philosophy, I give three broad reasons why good sci-fi movies should nevertheless be worth our time. These reasons lie in the fact that they can illustrate moral-philosophical problems, probe into possible solutions and, perhaps most importantly, anticipate new issues that may go along with the use of new technologies. (...)
  • The Strategic Robot Problem: Lethal Autonomous Weapons in War. Heather M. Roff - 2014 - Journal of Military Ethics 13 (3):211-227.
    The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering more and more attention. Much of the argument revolves around whether such machines would be able to uphold the principle of noncombatant immunity. However, much of the present debate fails to take into consideration the practical realities of contemporary armed conflict, particularly generating military objectives and the adherence to a targeting process. This paper argues that we must look to the targeting process (...)
  • War without virtue? Robert Sparrow - 2013 - In Bradley Jay Strawser (ed.), Killing by Remote Control: The Ethics of an Unmanned Military. New York: Oxford University Press. pp. 84-105.
    A number of recent and influential accounts of military ethics have argued that there exists a distinctive “role morality” for members of the armed services—a “warrior code.” A “good warrior” is a person who cultivates and exercises the “martial” or “warrior” virtues. By transforming combat into a “desk job” that can be conducted from the safety of the home territory of advanced industrial powers without need for physical strength or martial valour, long-range robotic weapons, such as the “Predator” and “Reaper” (...)
  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Negotiating autonomy and responsibility in military robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  • Machine ethics and the idea of a more-than-human moral world. Steve Torrance - 2011 - In Michael Anderson & Susan Leigh Anderson (eds.), Machine Ethics. Cambridge University Press. pp. 115.
  • Saying 'No!' to Lethal Autonomous Targeting. Noel Sharkey - 2010 - Journal of Military Ethics 9 (4):369-383.
    Plans to automate killing by using robots armed with lethal weapons have been a prominent feature of most US military forces’ roadmaps since 2004. The idea is to have a staged move from ‘man-in-the-loop’ to ‘man-on-the-loop’ to full autonomy. While this may result in considerable military advantages, the policy raises ethical concerns with regard to potential breaches of International Humanitarian Law, including the Principle of Distinction and the Principle of Proportionality. Current applications of remote piloted robot planes or drones offer (...)
  • Drones, information technology, and distance: mapping the moral epistemology of remote fighting. [REVIEW] Mark Coeckelbergh - 2013 - Ethics and Information Technology 15 (2):87-98.
    Ethical reflection on drone fighting suggests that this practice does not only create physical distance, but also moral distance: far removed from one’s opponent, it becomes easier to kill. This paper discusses this thesis, frames it as a moral-epistemological problem, and explores the role of information technology in bridging and creating distance. Inspired by a broad range of conceptual and empirical resources including ethics of robotics, psychology, phenomenology, and media reports, it is first argued that drone fighting, like other long-range (...)
  • Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles. Bradley Jay Strawser - 2010 - Journal of Military Ethics 9 (4):342-368.
    A variety of ethical objections have been raised against the military employment of uninhabited aerial vehicles (UAVs, drones). Some of these objections are technological concerns over UAVs’ abilities to function on par with their inhabited counterparts. This paper sets such concerns aside and instead focuses on supposed objections to the use of UAVs in principle. I examine several such objections currently on offer and show them all to be wanting. Indeed, I argue that we have a duty to protect an (...)
  • Information Warfare: A Philosophical Perspective. [REVIEW] Mariarosaria Taddeo - 2012 - Philosophy and Technology 25 (1):105-120.
    This paper focuses on Information Warfare—the warfare characterised by the use of information and communication technologies. This is a fast growing phenomenon, which poses a number of issues ranging from the military use of such technologies to its political and ethical implications. The paper presents a conceptual analysis of this phenomenon with the goal of investigating its nature. Such an analysis is deemed to be necessary in order to lay the groundwork for future investigations into this topic, addressing the ethical (...)
  • Robots as Weapons in Just Wars. Marcus Schulzke - 2011 - Philosophy and Technology 24 (3):293-306.
    This essay analyzes the use of military robots in terms of the jus in bello concepts of discrimination and proportionality. It argues that while robots may make mistakes, they do not suffer from most of the impairments that interfere with human judgment on the battlefield. Although robots are imperfect weapons, they can exercise as much restraint as human soldiers, if not more. Robots can be used in a way that is consistent with just war theory when they are programmed to (...)
  • Autonomous Military Robotics: Risk, Ethics, and Design. Patrick Lin, George Bekey & Keith Abney - unknown
  • The cubicle warrior: the marionette of digitalized warfare. [REVIEW] Rinie van Est - 2010 - Ethics and Information Technology 12 (3):289-296.
    In the last decade we have entered the era of remote controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we will discuss the role of the human operator or so-called ‘cubicle warrior’, who remotely controls the military robots behind visual interfaces. We will argue that the socio-technical system conditions the cubicle warrior to dehumanize the (...)
  • The philosophy of computer science. Raymond Turner - 2013 - Stanford Encyclopedia of Philosophy.
  • Deference to Opaque Systems and Morally Exemplary Decisions. James Fritz - forthcoming - AI and Society:1-13.
    Many have recently argued that there are weighty reasons against making high-stakes decisions solely on the basis of recommendations from artificially intelligent (AI) systems. Even if deference to a given AI system were known to reliably result in the right action being taken, the argument goes, that deference would lack morally important characteristics: the resulting decisions would not, for instance, be based on an appreciation of right-making reasons. Nor would they be performed from moral virtue; nor would they have moral (...)
  • Generative Artificial Intelligence and Authorship Gaps. Tamer Nawar - 2024 - American Philosophical Quarterly 61 (4):355-367.
    The ever increasing use of generative artificial intelligence raises significant questions about authorship and related issues such as credit and accountability. In this paper, I consider whether works produced by means of users inputting natural language prompts into Generative Adversarial Networks are works of authorship. I argue that they are not. This is not due to concerns about randomness or machine-assistance compromising human labor or intellectual vision, but instead due to the syntactical and compositional limitations of existing AI systems in (...)
  • Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)
  • How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  • Impactful Conceptual Engineering: Designing Technological Artefacts Ethically. Herman Veluwenkamp - forthcoming - Ethical Theory and Moral Practice:1-16.
    Conceptual engineering is the design, evaluation and implementation of concepts. Despite its popularity, some have argued that the methodology is not worthwhile, because the implementation of new concepts is both inscrutable and beyond our control. In the recent literature we see different responses to this worry. Some have argued that it is for political reasons just as well that implementation is such a difficult task, while others have challenged the metasemantic and social assumptions that underlie this skepticism about implementation. In (...)
  • Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  • Gamification, Side Effects, and Praise and Blame for Outcomes. Sven Nyholm - 2024 - Minds and Machines 34 (1):1-21.
    “Gamification” refers to adding game-like elements to non-game activities so as to encourage participation. Gamification is used in various contexts: apps on phones motivating people to exercise, employers trying to encourage their employees to work harder, social media companies trying to stimulate user engagement, and so on and so forth. Here, I focus on gamification with this property: the game-designer (a company or other organization) creates a “game” in order to encourage the players (the users) to bring about certain outcomes (...)
  • Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans (...)
  • Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This (...)
  • Machine agency and representation. Beba Cibralic & James Mattingly - 2024 - AI and Society 39 (1):345-352.
    Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that they do not, and so cannot be agents. Properly understood there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
  • Justice by Algorithm: The Limits of AI in Criminal Sentencing. Isaac Taylor - 2023 - Criminal Justice Ethics 42 (3):193-213.
    Criminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms (...)
  • Getting Machines to Do Your Dirty Work. Tomi Francis & Todd Karhu - forthcoming - Philosophical Studies:1-15.
    Autonomous systems are machines that can alter their behavior without direct human oversight or control. How ought we to program them to behave? A plausible starting point is given by the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever a human agent ought to do in the same circumstances. Although the Reduction to Acts Thesis is initially appealing, we argue that it is false: it is sometimes permissible to program a machine to (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)
  • Philosophy of AI: A structured overview. Vincent C. Müller - 2024 - In Nathalie A. Smuha (ed.), Cambridge handbook on the law, ethics and policy of Artificial Intelligence. Cambridge University Press. pp. 1-25.
    This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so (...)
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • A Moral Bind? — Autonomous Weapons, Moral Responsibility, and Institutional Reality. Bartek Chomanski - 2023 - Philosophy and Technology 36.
    In “Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit” (2022), Mariarosaria Taddeo and Alexander Blanchard answer one of the most vexing issues in current ethics of technology: how to close the so-called “responsibility gap”? Their solution is to require that autonomous weapons systems (AWSs) may only be used if there is some human being who accepts the ex ante responsibility for those actions of the AWS that could not have been predicted or intended (in such cases, (...)
  • A Case for 'Killer Robots': Why in the Long Run Martial AI May Be Good for Peace. Ognjen Arandjelović - 2023 - Journal of Ethics, Entrepreneurship and Technology 3 (1).
    Purpose: The remarkable increase of sophistication of artificial intelligence in recent years has already led to its widespread use in martial applications, the potential of so-called 'killer robots' ceasing to be a subject of fiction. Approach: Virtually without exception, this potential has generated fear, as evidenced by a mounting number of academic articles calling for the ban on the development and deployment of lethal autonomous robots (LARs). In the present paper I start with an analysis of the existing ethical (...)
  • Group blameworthiness and group rights. Stephanie Collins - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The following pair of claims is standardly endorsed by philosophers working on group agency: (1) groups are capable of irreducible moral agency and, therefore, can be blameworthy; (2) groups are not capable of irreducible moral patiency, and, therefore, lack moral rights. This paper argues that the best case for (1) brings (2) into question. Section 2 paints the standard picture, on which groups’ blameworthiness derives from their functionalist or interpretivist moral agency, while their lack of moral rights derives from their (...)
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  • Role of emotions in responsible military AI. José Kerstholt, Mark Neerincx, Karel van den Bosch, Jason S. Metcalfe & Jurriaan van Diggelen - 2023 - Ethics and Information Technology 25 (1):1-4.