  1. Autonomous military systems beyond human control: putting an empirical perspective on value trade-offs for autonomous systems design in the military. Christine Boshuijzen-van Burken, Martijn de Vries, Jenna Allen, Shannon Spruit, Niek Mouter & Aylin Munyasya - forthcoming - AI and Society: 1-17.
    The question of human control is a key concern in autonomous military systems debates. Our research qualitatively and quantitatively investigates values and concerns of the general public, as they relate to autonomous military systems, with particular attention to the value of human control. Using participatory value evaluation (PVE), we consulted 1,980 Australians about which values matter in relation to two specific technologies: an autonomous minesweeping submarine and an autonomous drone that can drop bombs. Based on value sensitive design, participants were (...)
  2. A Risk-Based Regulatory Approach to Autonomous Weapon Systems. Alexander Blanchard, Claudio Novelli, Luciano Floridi & Mariarosaria Taddeo - manuscript.
    International regulation of autonomous weapon systems (AWS) is increasingly conceived as an exercise in risk management. This requires a shared approach for assessing the risks of AWS. This paper presents a structured approach to risk assessment and regulation for AWS, adapting a qualitative framework inspired by the Intergovernmental Panel on Climate Change (IPCC). It examines the interactions among key risk factors—determinants, drivers, and types—to evaluate the risk magnitude of AWS and establish risk tolerance thresholds through a risk matrix informed by (...)
  3. From AI Ethics Principles to Practices: A Teleological Methodology to Apply AI Ethics Principles in The Defence Domain. Christopher Thomas, Alexander Blanchard & Mariarosaria Taddeo - 2024 - Philosophy and Technology 37 (1): 1-21.
    This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains. The methodology consists of a three-step process deployed by an independent, multi-stakeholder ethics board to: (1) identify the appropriate level of abstraction for modelling the AI lifecycle; (2) interpret prescribed principles to extract specific requirements to be met at each step of the AI lifecycle; and (3) define the criteria to inform purpose- and (...)