Lethal Autonomous Weapons (LAWs) have become the subject of continuous debate at both national and international levels. Arguments have been proposed both for the development and use of LAWs and for their prohibition from combat landscapes. Regardless, the development of LAWs continues in numerous nation-states. This paper builds upon previous philosophical arguments for the development and use of LAWs and proposes a design framework that can be used to ethically direct their development. The conclusion is that the philosophical arguments that underpin the adoption of LAWs rather than their prohibition, although prima facie insufficient, can be actualized through the proposed Value Sensitive Design (VSD) approach. Hence, what is proposed is a principled design approach that can be used to embed stakeholder values into a design, encourage stakeholder cooperation and coordination, and as a result promote social acceptance of LAWs as a preferable future fact of war.
Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular interest is given to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey of the implications of employing such ethical devices to replace humans in warfare is taken into account, this paper engages with current scholarship on the rejection or acceptance of LAWs—including contemporary technological shortcomings of LAWs in differentiating between targets, and the behavioral and psychological volatility of humans—and with current and proposed regulatory infrastructures for developing and using such devices. After careful consideration of these factors, this paper concludes that only ethical LAWs should be used to replace human involvement in war and, by extension of their consistent abilities, should remove humans from war until a more formidable discovery is made in conducting ethical warfare.
Arguments from human dignity feature prominently in the Lethal Autonomous Weapons moral feasibility debate, even though there exists considerable controversy over their role and soundness, and the notion of dignity remains under-defined. Drawing on the work of Dieter Birnbacher, I fix the sub-discourse as referring to the essential value of human persons in general, and to postulated moral rights of combatants not covered within the existing paradigm of International Humanitarian Law in particular. I then review and critique dignity-based arguments against LAWS: the argument from faulty targeting process, the argument from objectification, the argument from underappreciation of the value of human life, and the argument from the absence of mercy. I conclude that the argument from the absence of mercy is the only dignity-based argument that is both valid and irreducible to another class of arguments within the debate, and that it offers insufficient justification for a global ban on LAWS.
To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.
The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focuses on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC), termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of ‘full’ autonomy becomes unproblematic.
Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false and where the use of Automated Weapons Systems would in fact be morally required. These include cases where a) once one has activated a weapon expected then to behave lethally, it would be appropriate to let it continue because this is part of a plan whose goodness one was best positioned to evaluate before activating the weapon; b) one expects better long-term consequences from allowing it to continue; c) allowing it to continue would express a decision you made to be resolute, a decision that could not have advantaged you had it not been true that you would carry through with it; d) the weapon is mechanically not recallable, so that, to not allow it to carry through, you would have had to refrain from activating it in the first place, something you expected would have disastrous consequences; e) you must deputize necessary killings to autonomous machines in order to protect yourself from guilt you shouldn’t have to bear; f) it would be morally better for the burden of responsibility for the killing to be shared among several agents, and the agents deputizing killing to machines can do this, especially where it’s not predictable which machine will be successful; g) a killing would be morally better done with elements of randomness and lack of deliberation, and a (relatively stupid) machine could do this where a person could not; h) the machine would be acting as a Doomsday Device, so that it could not have had its hoped-for deterrent effect had you not ensured that you would be unable to recall it if enemy action activated it; i) letting it carry through is a necessary part of its own learning process, and you expect that this learning will have salutary effects later on; j) human intervention in the machine’s operation would disastrously impair its precision, or its speed and efficiency; k) using non-automated methods would require human resources you just don’t have in a task that nevertheless must be done (e.g., using land-mines to protect remote installations); l) the weapon has such horrible and indiscriminate power that it is doubtful whether it could actually be used in ways compatible with International Humanitarian Law and the Laws of War, which require that weapons be used only in ways respecting distinction, necessity and proportionality, but its threat of use could respect these principles in affording deterrence, provided human error cannot lead to their accidental deployment, this requiring that they be controlled by carefully designed autonomous and automatic systems. I then consider objections based on conceptions of human dignity and find that very often dignity too is best served by autonomous machine killing.
Examples include saving your village by activating a robot to kill invading enemies who would inflict great indignity on your village, using a suicide robot to save yourself from a less dignified death at enemy hands, using a robotic drone to kill someone otherwise not accessible in order to restore dignity to someone this person killed and to his family, and using a robot to kill someone who needs killing, but the killing of whom by a human executioner would soil the executioner’s dignity. I conclude that what matters in rightful killing isn’t necessarily that it be under the direct control of a human, but that it be under the control of morality; and that could sometimes require use of an autonomous or automated device. (This paper was formerly called "Fire and Forget: A Defense of the Use of Autonomous Weapons in War" on PhilPapers; the current title is the title of the published version.)
Recently, criticisms against autonomous weapons were presented in a video in which an AI-powered drone kills a person. However, some said that this video is a distraction from the real risk of AI—the risk of unlimitedly self-improving AI systems. In this article, we analyze arguments from both sides and turn them into conditions. The following conditions are identified as leading to autonomous weapons becoming a global catastrophic risk: 1) Artificial General Intelligence (AGI) development is delayed relative to progress in narrow AI and manufacturing. 2) The potential for very cheap manufacture of drones, with prices below 1 USD each. 3) Anti-drone defense capabilities lagging behind offensive development. 4) A special global military posture encouraging the development of drone swarms as a strategic offensive weapon, able to kill civilians. We conclude that while it is unlikely that drone swarms alone will become an existential risk, lethal autonomous weapons could contribute to civilizational collapse in case of a new world war.
Techno-ethics is the area in the philosophy of technology which deals with emerging robotic and digital AI technologies. In the last decade, a new techno-ethical challenge has emerged: Autonomous Weapon Systems (AWS), defensive and offensive (the article deals only with the latter). Such AI-operated lethal machines of various forms (aerial, marine, continental) raise substantial ethical concerns. Interestingly, the topic of AWS has received almost no treatment in Jewish law and its research. This article thus proposes an introductory ethical-halakhic perspective on AWS, in the Israeli context. The article has seven sections. Section 1 defines AWS and the main ethical concerns it evokes, while providing elementary definitions and distinctions. §2 locates AWS within the realm of Jewish laws of war (hilkhot-ẓava), which recognize the right to self-defense, as well as the status of universally accepted moral norms. §3 unfolds pragmatic ethical premises of a humane techno-ethics, which are required for identifying AWS as a moral question: I. Relationality; II. Technology is not (completely) neutral; III. The fallaciousness of transhumanism. It is argued that these premises are compatible with halakhic tradition. §4 investigates the question of the morality of AWS within the field of military AI ethics. It is clarified why the standard categories of war ethics (jus ad bellum, jus in bello) do not capture the singular ethical problem of AWS, which pertains to the operation of military means rather than their human targets. It is argued that a reductive perception of the human mind is misleading about the feasibility of ‘ethical robots’ capable of independent moral discretion. To provide a thick examination of human agency from the perspective of Jewish tradition, §5 explores two stories from the biblical book of Samuel (the murder of Nob’s priests and of Uriah). The lessons about the significance of moral agency within the public-political military sphere are made explicit, as well as the possible costs resulting from the loss of human agency in the case of AWS. Given that realpolitik considerations are basic in halakhah, §6 considers some possible contemporary socio-political implications of AWS that may risk the sustainability of the democratic project. §7 concludes by pointing out humane contributions of Jewish law to contemporary techno-ethics.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium on or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, and they do not take responsibility away from humans; in fact, they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: they are probably good news.
My original contribution to knowledge is: any weapon that exhibits intended and/or unintended lethal autonomy in targeting and interdiction does so by way of design and/or recessive flaw(s) in its systems of control, and any such weapon is capable of war-fighting and other battle-space interaction in a manner that its Human Commander does not anticipate. Even with the complexity of Lethal Autonomy issues, there is nothing in particular to gain from being a low-tech Military. Lethal autonomous weapons are therefore independently capable of exhibiting positive or negative recessive norms of targeting in their perceptions of Discrimination between Civilian and Military Objects, Proportionality of Methods and Outcomes, Feasible Precaution before interdiction, and the underlying Concepts of Humanity. Additionally, Lethal Autonomy in Human-interacting Autonomous Robots is ubiquitous [designed and/or recessive]. This marks the completion of an Open PhD (#openphd) project done in sui generis form.
The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon...
May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the re-distribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements. An implication of our argument is that there is a public responsibility to regulate killer robots’ design and manufacture.
Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolving. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor: robots are capable of better identifying combatants and civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur more risks to protect innocents or gather more information before using deadly force; robots can assess situations more quickly and do so without emotion, reducing the likelihood of fatal mistakes due to human error; and sending robots to war protects our own soldiers from harm. However, these arguments only point in favor of autonomous weapons systems, failing to demonstrate why such systems need be made *lethal*. In this paper I argue that if one grants all of the proponents' points in favor of LAWS, then, contrary to what might be expected, this leads to the conclusion that it would be both immoral and illegal to deploy *lethal* autonomous weapons, because the many features that speak in favor of them also undermine the need for them to be programmed to take lives. In particular, I argue that such systems, if lethal, would violate the moral and legal principle of necessity, which forbids the use of weapons that impose superfluous injury or unnecessary harm. I conclude by highlighting that the argument is not against autonomous weapons per se, but only against *lethal* autonomous weapons.
Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving ever more attention, both in public discourse and by scholars and policymakers. Much of this interest is connected with emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation of the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable than flesh-and-blood soldiers, it will increasingly be the case that such soldiers are “in the power” of those AWS which fight against them. This implies that such soldiers ought to be considered hors de combat, and not targeted. In arguing for this point, we draw out a broader conclusion regarding hors de combat status, namely that it must be viewed contextually, with close reference to the capabilities of combatants on both sides of any discrete engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, we argue that each particular AWS will likely need its own standard for when enemy soldiers are deemed hors de combat. We conclude by examining how these nuanced views of hors de combat status might impact on meaningful human control of AWS.
While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law; so there is not much that is philosophically challenging in having robots be some of these agents — excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each conception sees law as unambiguous rules inherently uncontroversial in each application; I consider the prospects for robotizing law on each conception, and likewise the prospects of robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, which robots can participate in only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.
Unlike human soldiers, autonomous weapons systems are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security risks and will make AWS critically vulnerable to hacking. I claim that the political and tactical consequences of hacked AWS far outweigh the purported advantages of AWS not being affected by psychological factors and always following orders. Therefore, one of the moral justifications for the deployment of AWS is undermined.
Lethal Autonomous Weapon Systems are here. Technological development will see them become widespread in the near future, in a matter of years rather than decades. When the UN Convention on Certain Conventional Weapons meets on 10–14 November 2014, well-considered guidance for a decision on the general policy direction for LAWS is clearly needed. While there is widespread opposition to LAWS—or ‘killer robots’, as they are popularly called—and a growing campaign advocates banning them outright, we argue the opposite: LAWS may very well reduce suffering and death in war. Rather than banning them, they should be regulated, to ensure both compliance with international humanitarian law and that this positive outcome occurs. This policy memo sets out the basic structure and content of the regulation required.
Predictions about autonomous weapon systems are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately offers an argument that nuclear weapons and their effect on the development of modern asymmetrical warfare are the best analogy to the introduction of AWS. The final section focuses on this analogy and offers speculations about the likely consequences of AWS being hacked. These speculations tacitly draw on myths and tropes about technology and AI from popular fiction, such as Frankenstein, to project a convincing model of the risks and benefits of AWS deployment.
A brief overview of Autonomous Weapon Systems (AWS) and their different levels of autonomy is provided, followed by a discussion of the risks represented by these systems in the light of just war principles and insights from research in cybersecurity. Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This commentary starts from the example of chemical weapons, now banned worldwide by the Geneva Protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “weaponization of Artificial Intelligence” (AI). We are led to conclude that AWS fail the discrimination principle, and that the only way of mitigating the risk they represent to humankind is the rapid negotiation of treaties for the implementation of an international zero-tolerance policy against the development and/or deployment of autonomous weapon systems. Given that scientific research on AWS is altogether lacking in the public domain, the viewpoint here is based on common sense rather than scientific evidence. Yet the implications of the potential weaponization of our work as scientists, especially in the field of AI, reach further than we may think. The potential consequences of the deployment of AWS for citizen stakeholders are incommensurable. This viewpoint points towards good reasons why we need to raise awareness of the threats represented by AWS, and towards legal policies to ensure that these threats will not materialize.
Autonomous weapons systems, often referred to as ‘killer robots’, have been a hallmark of popular imagination for decades. However, with the inexorable advance of artificial intelligence (AI) systems and robotics, killer robots are quickly becoming a reality. These lethal technologies can learn, adapt, and potentially make life-and-death decisions on the battlefield with little to no human involvement. This naturally leads to not only legal but ethical concerns as to whether we can meaningfully control such machines, and if so, then how. Such concerns are made even more poignant by the ever-present fear that something may go wrong and the machine may carry out some action(s) violating the ethics or laws of war. Researchers, policymakers, and designers are caught in the quagmire of how to approach these highly controversial systems and to figure out what exactly it means to have meaningful human control over them, if at all. In Designed for Death, Dr Steven Umbrello aims to produce a realistic but also optimistic guide for how, with human values in mind, we can begin to design killer robots. Drawing on the value sensitive design (VSD) approach to technology innovation, Umbrello argues that context is king and that a middle path for designing killer robots is possible if we consider both ethics and design as fundamentally linked. Umbrello moves beyond the binary debate over whether or not to prohibit killer robots and instead offers a more nuanced perspective of which types of killer robots may be both legally and ethically acceptable, when they would be acceptable, and how to design for them.
Would it be ethical to deploy autonomous weapon systems (AWS) if they were unable to reliably recognize when enemy forces had surrendered? I suggest that an inability to reliably recognize surrender would not prohibit the ethical deployment of AWS where there was a limited window of opportunity for targets to surrender between the launch of the AWS and its impact. However, the operations of AWS with a high degree of autonomy and/or long periods of time between release and impact are likely to remain controversial until they have the capacity to reliably recognize surrender.
Please contact me at [email protected] if you are interested in reading a particular chapter or being sent the entire manuscript for private use. The thesis offers a comprehensive argument in favor of a regulationist approach to autonomous weapon systems (AWS). AWS, defined as all military robots capable of selecting or engaging targets without direct human involvement, are an emerging and potentially deeply transformative military technology subject to very substantial ethical controversy. AWS have both their enthusiasts and their detractors, the latter prominently advocating for a global preemptive ban on AWS development and use. Rejecting both positions, the author outlines a middle-of-the-road regulationist approach that is neither overly restrictive nor overly permissive. The disqualifying flaws of the rival prohibitionist approach are demonstrated in the process. After defining the core term of autonomy in weapon systems, the practical difficulties involved in applying an arms control regime to AWS are analyzed. The analysis shows that AWS are an extremely regulation-resistant technology. This feature, when combined with their assumed high military utility, makes a ban framework extremely costly to impose and enforce. As such, it is ultimately very likely to fail to the benefit of the most unscrupulous international actors and at a very substantial risk to those abiding by international law. Consequently, to be ethically viable, a prohibitionist framework would need to offer substantial moral benefits impossible to attain through the rival regulationist approach. The remainder of the thesis undertakes to demonstrate that this is not the case. Comparing the considerations of military and strategic necessity to the humanitarian concerns most commonly voiced by prohibitionists requires finding a common denominator for all the values being referred to. Consequently, the thesis proceeds to show that both kinds of concerns are ultimately reducible to respect for the basic human rights of all stakeholders, and so that the prohibitionist and regulationist approaches may ultimately be compared in terms of the consequences their adoption would generate for the realization of basic human rights. The author then evaluates both the potential humanitarian benefits and the potential humanitarian hazards of AWS introduction. The benefits of leaving frontline combat to machines are outlined, with the unique kinds of suffering that would be abolished by such a development described in detail. The arguments against AWS adoption are then divided into three classes: arguments related to the alleged impossibility of compliance with the Laws of Armed Conflict, non-consequentialist arguments, and broad consequentialist arguments. This analysis, which comprises the greater part of the entire thesis, shows that the concerns behind compliance arguments are indeed substantial and have to be accommodated via a complex framework of best practices, regulations, and localized restrictions on some kinds of AWS or AWS use in particular environments. They do not, however, justify a universal ban on using all the diverse forms of AWS in all environments. Non-consequentialist objections are found either reducible to other classes of arguments or thoroughly unconvincing, sometimes to the point of being actually vacuous.
Broad consequentialist concerns are likewise found to be accommodable by regulation, empirically unfounded, or causally disconnected from the actions of legitimate actors acquiring AWS, and therefore irrelevant to the moral permissibility of such actions. The author concludes that the proponents of prohibitionism are unable to point to moral benefits substantial enough to justify the costs and risks inherent in the approach. A global ban is, in fact, likely to have a worse humanitarian impact than well-regulated AWS adoption, even if the strategic risks are disregarded. On the other hand, the analysis shows that there indeed exists an urgent need to regulate AWS through a variety of technological, procedural, and legal solutions. These include, but are not limited to, a temporary moratorium on anti-personnel AWS use, the development of internationally verified compliance software and an eventual legal requirement of its employment, a temporary moratorium on AWS proliferation to state actors, and a ban on their proliferation to non-state agents.
Autonomous Weapons Systems (AWS) have not gained a good reputation in the past. This attitude is odd if we look at the discussion of other, usually highly anticipated, AI technologies, like autonomous vehicles (AVs): even though these machines evoke very similar ethical issues, philosophers' attitudes towards them are constructive. In this article, I try to prove that there is an unjust bias against AWS, because almost every argument against them is effective against AVs too. I start with the definition of "AWS". Then, I arrange my arguments by the Just War Theory (JWT), covering jus ad bellum, jus in bello, and jus post bellum problems. Meanwhile, I draw attention to similar problems with other AI technologies outside the JWT framework. Finally, I address an exception, as addressed by Duncan Purves, Ryan Jenkins and Bradley Strawser, who realized the unjustified double standard and deliberately tried to construct a special argument which rules out only AWS.
Autonomous Weapon Systems (AWS) are the next logical advancement for military technology. There is a significant concern, though, that by allowing such systems on the battlefield, we are collectively abdicating our moral responsibility. In this thesis, I will examine two arguments that advocate for a total ban on the use of AWS. I call these arguments the “Responsibility” and the “Agency” arguments. After presenting these arguments, I provide my own objections and demonstrate why these arguments fail to convince. I then provide an argument as to why the use of AWS is a rational choice in the evolution of warfare. I conclude my thesis by providing a framework upon which future international regulations regarding AWS could be built.
The international debate on the ethics and legality of autonomous weapon systems (AWS), as well as the call for a ban, is primarily focused on the nebulous concept of fully autonomous AWS; more specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced from both military planning and decision-making operations and the design requirements that govern AWS engineering and, subsequently, the tracking and tracing of moral responsibility. To do this, the thesis marries two different levels of meaningful human control (MHC), termed levels of abstraction, to couple military operations with design ethics. In doing so, the thesis argues that the contentious notion of ‘full’ autonomy is not problematic under this two-tiered understanding of MHC. It proceeds to propose the value sensitive design (VSD) approach as a means for designing for MHC.
This paper makes the case for arms control regimes to govern the development and deployment of autonomous weapon systems and long-range uninhabited aerial vehicles.
We propose a multi-step evaluation schema designed to help procurement agencies and others to examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.
The world is in a state of crisis. Global problems that threaten our future include: the climate crisis; the destruction of natural habitats, catastrophic loss of wildlife, and mass extinction of species; lethal modern war; the spread of modern armaments; the menace of nuclear weapons; pollution of earth, sea and air; the rapid rise in the human population; increasing antibiotic resistance; and the degradation of democratic politics, brought about in part by the internet. It is not just that universities around the world have failed to help humanity solve these global problems; even worse, they have made the genesis of these problems possible. Modern science and technology, developed in universities, have made possible modern industry and agriculture, modern hygiene and medicine, modern power production and travel, and modern armaments, which in turn make possible much that is good, all the great benefits of the modern world, but also all the global crises that now threaten our future. What has gone wrong? The fault lies with the whole conception of inquiry built into universities around the world. The basic idea is to help promote human welfare by, in the first instance, acquiring scientific knowledge and technological know-how. First, knowledge is to be acquired; once acquired, it can be applied to help solve social problems and promote human welfare. But this basic idea is an intellectual disaster. Judged from the standpoint of promoting human welfare, it is profoundly and damagingly irrational, in a structural way. As a result of being restricted to the tasks of acquiring and applying knowledge, universities are prevented from doing what they most need to do to help humanity solve global problems, namely, engage actively with the public to promote action designed to solve global problems. We urgently need to bring about a revolution in universities around the world, wherever possible, so that their central task becomes to help humanity learn how to solve the climate crisis and other problems of living, local and global, so that we may make progress towards a good, civilized world. Almost every branch and aspect of the university needs to change.
This is the short version, in French translation by Anne Querrien, of the originally jointly authored paper: Müller, Vincent C., ‘Autonomous killer robots are probably good news’, in Ezio Di Nucci and Filippo Santoni de Sio, Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. - - - The article that follows presents a new robot-based weapons system that is likely to be used in the near future. Unlike drones, which are operated remotely but involve a significant element of human judgment, these are machines programmed to defend, attack, or kill autonomously. The authors, philosophers, prefer to warn of their imminent spread and to obtain their regulation by the United Nations. An international campaign instead proposes their prohibition.
The Tannhäuser Gate: Architecture in science fiction films of the second half of the 20th and the beginning of the 21st century as a component of utopian and dystopian projections of the future. Films of the science fiction genre from the second half of the 20th and early 21st century contained many visions of the future, which were at the same time a reflection on the achievements and deficiencies of modern times. In the 1960s, cinematographic works were dominated by optimism and faith in the possibility of never-ending progress. The disappearance of political divisions between the blocs of states and the joint exploration of the cosmos were foreseen. Designers undertook cooperation with scientists, which manifested itself in the depiction of cosmic constructions far exceeding real technical capabilities. Starting from the 1970s, pessimism and the belief that the future would bring, above all, the intensification of negative phenomena of the present began to grow in films. Fears of the future were connected with indicating various possible defects and the insoluble contradictions between them. While some dystopian visions illustrated the threat of rising crime, others depicted the future as saturated with state control mechanisms and the prevalence of surveillance. The fears shown on screen were also aroused by the growth of large corporations, especially by their gaining political influence or staying outside the system of democracy. The authors of the films also presented their suspicions related to the creation of new types of weapons by corporations, the use of which might breach current legal norms. Particular objections concerned research on biological weapons and the possible spread of lethal viruses. The development of robotics and research into artificial intelligence, which must have resulted in the appearance of androids and inevitable tensions in their relations with humans, also triggered fear. Another problem for film-makers became hybrids that combine people and electronic parts. Scriptwriters and directors likewise considered the development of genetic engineering, which led to the creation of mutant human beings. A number of film dystopias contemplated the possibility of the collapse of democratic systems and the development of authoritarian regimes in their place, often based on broad public support. This kind of dystopia also includes films presenting the consequences of contemporary hedonism and consumerism. The problem is, however, that works critical of these phenomena were themselves advertisements for attractive products.
Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks in Security and Defence. Australia is committed to the OECD’s values-based principles for the responsible stewardship of trustworthy AI, as well as adopting a set of national AI ethics principles. While Australia has not adopted an AI governance framework specifically for Defence, Defence Science has published the ‘A Method for Ethical AI in Defence’ (MEAID) technical report, which includes a framework and pragmatic tools for managing ethical and legal risks for military applications of AI.
The first thing we must keep in mind is that when saying that China says this or China does that, we are not speaking of the Chinese people, but of the Sociopaths who control the CCP (Chinese Communist Party), i.e., the Seven Senile Sociopathic Serial Killers (SSSSK) of the Standing Committee of the CCP or the 25 members of the Politburo, etc. The CCP’s plans for WW3 and total domination are laid out quite clearly in Chinese govt publications and speeches, and this is Xi Jinping’s “China Dream”. It is a dream only for the tiny minority (perhaps a few dozen to a few hundred) who rule China and a nightmare for everyone else (including 1.4 billion Chinese). The 10 billion dollars yearly enables them or their puppets to own or control newspapers, magazines, TV and radio channels and place fake news in most major media everywhere every day. In addition, they have an army (maybe millions of people) who troll all the media, placing more propaganda and drowning out legitimate commentary (the 50 cent army). In addition to stripping the 3rd world of resources, a major thrust of the multi-trillion dollar Belt and Road Initiative is building military bases worldwide. They are forcing the free world into a massive high-tech arms race that makes the cold war with the Soviet Union look like a picnic. Though the SSSSK, and the rest of the world’s militaries, are spending huge sums on advanced hardware, it is highly likely that WW3 (or the smaller engagements leading up to it) will be software dominated. It is not out of the question that the SSSSK, with probably more hackers (coders) working for them than all the rest of the world combined, will win future wars with minimal physical conflict, just by paralyzing their enemies via the net: no satellites, no phones, no communications, no financial transactions, no power grid, no internet, no advanced weapons, no vehicles, trains, ships or planes. There are only two main paths to removing the CCP, freeing 1.4 billion Chinese prisoners, and ending the lunatic march to WW3. The peaceful one is to launch an all-out trade war to devastate the Chinese economy until the military gets fed up and boots out the CCP. An alternative to shutting down China’s economy is a limited war, such as a targeted strike by say 50 thermobaric drones on the 20th Congress of the CCP, when all the top members are in one place, but that won’t take place until 2022, so one could hit the annual plenary meeting. The Chinese would be informed, as the attack happened, that they must lay down their arms and prepare to hold a democratic election or be nuked into the stone age. The other alternative is an all-out nuclear attack. Military confrontation is unavoidable given the CCP’s present course. It will likely happen over the islands in the South China Sea or Taiwan within a few decades, but as they establish military bases worldwide it could happen anywhere (see Crouching Tiger etc.). Future conflicts will have hardkill and softkill aspects, with the stated objectives of the CCP being to emphasize cyberwar by hacking and paralyzing control systems of all military and industrial communications, equipment, power plants, satellites, internet, banks, and any device or vehicle connected to the net. The SS are slowly fielding a worldwide array of manned and autonomous surface and underwater subs or drones capable of launching conventional or nuclear weapons, which may lie dormant awaiting a signal from China or even looking for the signature of US ships or planes. While destroying our satellites, thus eliminating communication between the USA and our forces worldwide, they will use theirs, in conjunction with drones, to target and destroy our currently superior naval forces. Of course, all of this is increasingly done automatically by AI. By far the biggest ally of the CCP is the Democratic party of the USA. The choice is to stop the CCP now or watch as they extend the Chinese prison over the whole world. Of course, universal surveillance and digitizing of our lives is inevitable everywhere. Anyone who does not think so is profoundly out of touch. Of course, it is the optimists who expect the Chinese sociopaths to rule the world, while the pessimists (who view themselves as realists) expect AI sociopathy (or AS as I call it, i.e., Artificial Stupidity or Artificial Sociopathy) to take over, perhaps by 2030. Those interested in further details on the lunatic path of modern society may consult my other works, such as Suicide by Democracy-an Obituary for America and the World, 3rd Edition (2019), and Suicidal Utopian Delusions in the 21st Century: Philosophy, Human Nature and the Collapse of Civilization, 5th ed. (2019).
In this paper I propose and develop a social account of global autonomy. On this view, a person is autonomous simply to the extent to which it is difficult for others to subject her to their wills. I argue that many properties commonly thought necessary for autonomy are in fact properties that tend to increase an agent’s immunity to such interpersonal subjection, and that the proposed account is therefore capable of providing theoretical unity to many of the otherwise heterogeneous requirements of autonomy familiar from recent discussions. Specifically, I discuss three such requirements: (i) possession of legally protected status, (ii) a sense of one’s own self-worth, and (iii) a capacity for critical reflection. I argue that the proposed account is not only theoretically satisfying but also yields a rich and attractive conception of autonomy.
Since at least 2016, many have worried that social media enables authoritarians to meddle in democratic politics. The concern is that trolls and bots amplify deceptive content. In this chapter I argue that these tactics have a more insidious anti-democratic purpose. Lies implanted in democratic discourse by authoritarians are often intended to be caught. Their primary goal is not to successfully deceive, but rather to undermine the democratic value of testimony. In well-functioning democracies, our mutual reliance on testimony also generates a distinctively democratic value: decentralized testimonial networks evade control by the state or powerful actors. The deliberate exposure of deception in democratic testimonial networks undermines this value by implicating citizens in their own epistemic corruption, weakening the resilience of democratic society against authoritarian pressure. In this chapter I illustrate that danger through a close reading of recent Russian social media interference operations, showing both their epistemic underpinnings and their ongoing political threat.
The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, however, that both contractarian approaches and harm minimisation standards are flawed, due to a failure to account for the fundamental difference between those ‘involved’ and ‘uninvolved’ in an impending crash. Drawing from classical works on the trolley problem, we show how this notion can be substantiated by reference to either the distinction between negative and positive rights, or to differences in people’s claims. By supplementing harm minimisation with corresponding constraints, we can develop crash algorithms for autonomous cars which are both ethically adequate and promise to overcome certain significant practical barriers to implementation.
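To make the structure of such a rule concrete, here is a minimal sketch of harm minimisation supplemented by an involved/uninvolved constraint. It is my illustration only: the names (Outcome, permissible, choose_trajectory) and the fallback behaviour are assumptions, not anything the paper specifies.

from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    party: str            # affected person or group
    involved: bool        # already part of the impending crash?
    expected_harm: float  # probability-weighted harm estimate

Trajectory = List[Outcome]

def permissible(t: Trajectory) -> bool:
    # Constraint: a trajectory may not impose new harm on anyone
    # 'uninvolved' in the impending crash (a negative-rights reading).
    return all(o.involved or o.expected_harm == 0.0 for o in t)

def total_harm(t: Trajectory) -> float:
    return sum(o.expected_harm for o in t)

def choose_trajectory(candidates: List[Trajectory]) -> Trajectory:
    # Minimise aggregate expected harm over the constrained set; if no
    # trajectory satisfies the constraint, fall back to plain harm
    # minimisation (itself a contestable design choice).
    allowed = [t for t in candidates if permissible(t)] or candidates
    return min(allowed, key=total_harm)

On this sketch, a swerve that harms an uninvolved bystander is filtered out even when it would minimise aggregate harm, which is precisely the structural difference the authors press against pure harm-minimisation standards.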
The digital revolution, in the form of autonomous driving, is changing the very essence of mobility. This paper discusses four different ways in which these transformations are taking place and argues that public policies and business strategies need to focus on innovating and re-engineering (enveloping) whole environments. Only then will autonomous vehicles become an ordinary – and environmentally sustainable – reality.
In this paper, we argue that solutions to normative challenges associated with autonomous driving, such as real-world trolley cases or distributions of risk in mundane driving situations, face the problem of reasonable pluralism: Reasonable pluralism refers to the fact that there exists a plurality of reasonable yet incompatible comprehensive moral doctrines within liberal democracies. The corresponding problem is that a politically acceptable solution cannot refer to only one of these comprehensive doctrines. Yet a politically adequate solution to the normative challenges of autonomous driving need not come at the expense of an ethical solution, if it is based on moral beliefs that are (1) shared in an overlapping consensus and (2) systematized through public reason. Therefore, we argue that a Rawlsian justificatory framework is able to adequately address the normative challenges of autonomous driving and elaborate on how such a framework might be employed for this purpose.
In October 2011, the “2nd European Network for Cognitive Systems, Robotics and Interaction”, EUCogII, held its meeting in Groningen on “Autonomous activity in real-world environments”, organized by Tjeerd Andringa and myself. This is a brief personal report on why we thought autonomy in real-world environments is central for cognitive systems research and what I think I learned about it. --- The theses that crystallized are that a) autonomy is a relative property and a matter of degree, b) increasing autonomy of an artificial system from its makers and users is a necessary feature of increasingly intelligent systems that can deal with the real world, and c) more such autonomy means less control but at the same time improved interaction with the system.
Several recent lines of inquiry have pointed to the amygdala as a potential lesion site in autism. Because one function of the amygdala may be to produce autonomic arousal at the sight of a significant face, we compared the responses of autistic children to their mothers’ face and to a plain paper cup. Unlike normals, the autistic children as a whole did not show a larger response to the person than to the cup. We also monitored sympathetic activity in autistic children as they engaged in a wide range of everyday behaviours. The children tended to use self-stimulation activities in order to calm hyper-responsive activity of the sympathetic (‘fight or flight’) branch of the autonomic nervous system. A small percentage of our autistic subjects had hyporesponsive sympathetic activity, with essentially no electrodermal responses except to self-injurious behaviour. We sketch a hypothesis about autism according to which autistic children use overt behaviour in order to control a malfunctioning autonomic nervous system and suggest that they have learned to avoid using certain processing areas in the temporal lobes.
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
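To see the shape of the worry, consider a schematic example of my own (not taken from the paper): suppose the microphysical and the special-science chance functions assign different chances to the same event E at the same time. Lewis's Principal Principle, which ties rational credence Cr to known chance, then issues incompatible demands:

\[
ch_{\mathrm{micro}}(E) = 0.5, \qquad ch_{\mathrm{macro}}(E) = 0.6,
\]
\[
Cr\big(E \mid ch(E) = x\big) = x \;\Longrightarrow\; Cr(E) = 0.5 \ \text{and} \ Cr(E) = 0.6,
\]

and no probabilistically coherent credence function satisfies both. Which such conflicts are genuinely problematic, and whether the amended accounts permit them, is exactly what the paper sets out to determine.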
This paper argues that culture itself can be a weapon against the disentitled within cultures, and against members of other cultures; and that when cultures are unjust and hegemonic, the theft and destruction of elements of their culture can be a justifiable weapon of self-defense by the oppressed. This means that in at least some conflicts, those that are really insurgencies against oppression, such theft and destruction should not be seen as war crimes, but as legitimate military maneuvers. The paper also argues that in general it is better for wars to be prosecuted by the theft and destruction of cultural property rather than by the killing and debasing of lives, so that, again, these things should not be disincentivized by being classed as war crimes, but should in fact be the preferred methods of war. This makes it all the more problematic to have these things counted as war crimes when killing and rape are not. In the course of these arguments, a distinction is drawn between people and their culture, and the question is mooted whether the destruction of cultural artifacts is an evil, and if so, how great an evil. Finally, an argument is given against the view that it is wrong for art and culture experts to assess the value of artifacts on the grounds that doing so enables the theft and destruction of artifacts and their cultures. If we do not place value on things, we cannot know what is most good, and so most worth preserving, in cultures and their artifacts. So we must carry on with judging, and then make sure we act to prevent the exploitation of the things we have rightly come to value.
This chapter discusses how research in situationist social psychology may pose largely undiscussed threats to autonomous agency, free will, and moral responsibility.
When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, and the impossibility of fully predicting how autonomous systems will behave once deployed, it is unclear at a conceptual level who is morally responsible for harms caused by autonomous systems. I review past work on this topic, criticizing prior works for suggesting workarounds rather than philosophical answers to the conceptual problem presented by the responsibility gap. The view I develop, drawing on my earlier work on vicarious moral responsibility, explains why computing professionals are ethically required to take responsibility for the systems they design, despite not being blameworthy for the harms these systems may cause.
This paper discusses the epistemic status of biology from the standpoint of the systemic approach to living systems based on the notion of biological autonomy. This approach aims to provide an understanding of the distinctive character of biological systems, and this paper analyses its theoretical and epistemological dimensions. The paper argues that, considered from this perspective, biological systems are examples of emergent phenomena, that the biological domain exhibits special features with respect to other domains, and that biology as a discipline employs some core concepts, such as teleology, function, and regulation, among others, that are irreducible to those employed in physics and chemistry. It addresses the claim made by Jacques Monod that biology as a science is marginal. It argues that biology is general insofar as it constitutes a paradigmatic example of complexity science, both in terms of how it defines the theoretical object of study and of the epistemology and heuristics employed. As such, biology may provide lessons that can be applied more widely to develop an epistemology of complex systems.
This volume brings together the advanced research results obtained by the European COST Action 2102: “Cross Modal Analysis of Verbal and Nonverbal Communication”. The research published in this book was discussed at the 3rd joint EUCOGII-COST 2102 International Training School entitled “Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues” and held in Caserta, Italy, on March 15-19, 2010.
Autonomous vehicles (AVs) are expected to improve road traffic safety and save human lives. It is also expected that some AVs will encounter so-called dilemmatic situations, like choosing to save two passengers by sacrificing one pedestrian, or to save three pedestrians by sacrificing one passenger. These expectations fuel the extensive debate over the ethics settings of AVs: the way AVs should be programmed to act in dilemmatic situations and who should decide about the nature of this programming in the first place. In the article, the ethics settings problem is analyzed as a trilemma between AVs with personal ethics setting (PES), AVs with mandatory ethics setting (MES) and AVs with no ethics settings (NES). It is argued that both PES and MES, by being programmed to choose one human life over the other, are bound to cause serious moral damage resulting from the violation of several principles central to deontology and utilitarianism. NES is defended as the only plausible solution to this trilemma, that is, as the solution that sufficiently minimizes the number of traffic fatalities without causing any comparable moral damage.
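Purely as an illustration of the trilemma's structure (the names and the NES behaviour below are my assumptions, not the article's specification), the three options can be pictured as a configuration choice:

from enum import Enum, auto

class EthicsSetting(Enum):
    PES = auto()  # personal: the owner pre-selects whom to favour in dilemmas
    MES = auto()  # mandatory: a state-imposed rule selects whom to favour
    NES = auto()  # none: the vehicle never ranks one human life over another

def dilemma_policy(setting: EthicsSetting) -> str:
    # Caricatured responses in a forced two-option dilemma.
    if setting is EthicsSetting.PES:
        return "apply the owner's pre-selected ranking of lives"
    if setting is EthicsSetting.MES:
        return "apply the legislated ranking of lives"
    # NES, the option the article defends: no targeted choice is made;
    # e.g. brake maximally and hold the lane without ranking persons.
    return "brake maximally, hold trajectory, rank no one"

The article's claim is then that only the third policy avoids programming, in advance, a choice of one life over another.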
This article interrogates the bureaucratization of war, incarnate in the covert lethal drone. Bureaucracies are criticized typically for their complexity, inefficiency, and inflexibility. This article is concerned with their moral indifference. It explores killing which is so highly administered, so morally remote, and of such scale that we must acknowledge it as a covert lethal program: a bureaucratized program of assassination in contravention of critical human rights. In this article, this program is seen to compromise the advance of global justice. Moreover, the bureaucratization of lethal force is seen to dissolve democratic ideals from within. The bureaucracy isolates the citizens from lethal force applied in their name. People are killed, in the name of the State, but without conspicuous justification, or judicial review, and without informed public debate. This article gives an account of the risk associated with the bureaucratization of the State’s lethal power. Exemplified by the covert drone, this is power with formidable reach. It is also power that requires great moral sensitivity. Considering the drone program, this article identifies challenges that will become more prominent and pressing as technology advances.