Citations of:
Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Particular attention is given to ethical LAWs, artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
ABSTRACT: Autonomous weapons systems pose many challenges in complex battlefield environments. Previous discussions of them have largely focused on technological or policy issues. In contrast, we focus here on the challenge of trust in an AWS. One type of human trust depends only on judgments about the predictability or reliability of the trustee, and so is suitable for all manner of artifacts. However, AWSs that are worthy of the descriptor “autonomous” will not exhibit the required strong predictability in the complex, changing (...)
ABSTRACT: Two categories of ethical questions surrounding military autonomous systems are discussed in this article. The first category concerns ethical issues regarding the use of military autonomous systems in the air and in the water. These issues are systematized with the Laws of Armed Conflict (LOAC) as a backdrop. The second category concerns whether autonomous systems may affect the ethical interpretation of LOAC. It is argued that some terms in LOAC are vague and can be interpreted differently depending on which ethical normative (...)
One of the primary, if not the most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of their decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful, if not outright recalcitrant. For this reason, the values of stakeholders are of particular significance given the risks posed by the opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)