  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spread out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Computer Science as Empirical Inquiry: Symbols and Search. Allen Newell & H. A. Simon - 1976 - Communications of the ACM 19:113-126.
  • Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and that, since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  • A Comparative Analysis of the Definitions of Autonomous Weapons Systems. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Science and Engineering Ethics 28 (5):1-22.
    In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. This is detrimental both in terms of fostering an understanding of AWS and in facilitating (...)
  • Meaningful human control as reason-responsiveness: the case of dual-mode vehicles. Giulio Mecacci & Filippo Santoni de Sio - 2020 - Ethics and Information Technology 22 (2):103-115.
    In this paper, in line with the general framework of value-sensitive design, we aim to operationalize the general concept of “Meaningful Human Control” in order to pave the way for its translation into more specific design requirements. In particular, we focus on the operationalization of the first of the two conditions investigated: the so-called ‘tracking’ condition. Our investigation concerns one specific subcase of automated systems: dual-mode driving systems. First, we connect and compare meaningful human control with (...)
  • AI and the expert; a blueprint for the ethical use of opaque AI. Amber Ross - 2022 - AI and Society.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  • Autonomous weapon systems and responsibility gaps: a taxonomy. Nathan Gabriel Wood - 2023 - Ethics and Information Technology 25 (1):1-14.
    A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so by presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon (...)
  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Cynthia Rudin - 2019 - Nature Machine Intelligence 1.
  • The Society of Mind. Marvin Minsky - 1987 - The Personalist Forum 3 (1):19-32.
  • Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik & A. Barbado - 2020 - Information Fusion 58.
  • “Trust but Verify”: The Difficulty of Trusting Autonomous Weapons Systems. Heather M. Roff & David Danks - 2018 - Journal of Military Ethics 17 (1):2-20.
    Autonomous weapons systems pose many challenges in complex battlefield environments. Previous discussions of them have largely focused on technological or policy issues. In contrast, we focus here on the challenge of trust in an AWS. One type of human trust depends only on judgments about the predictability or reliability of the trustee, and so is suitable for all manner of artifacts. However, AWSs that are worthy of the descriptor “autonomous” will not exhibit the required strong predictability in the complex, changing (...)
  • Autonomous Weapon Systems: A Clarification. Nathan Gabriel Wood - 2023 - Journal of Military Ethics 22 (1):18-32.
    Due to advances in military technology, there has been an outpouring of research on what are known as autonomous weapon systems (AWS). However, it is common in this literature for arguments to be made without first making clear exactly what definitions one is employing, with the detrimental effect that authors may speak past one another or even miss the targets of their arguments. In this article I examine the U.S. Department of Defense and International Committee of the Red Cross definitions (...)
  • Legal reviews of in situ learning in autonomous weapons. Zena Assaad & Tim McFarland - 2023 - Ethics and Information Technology 25 (1):1-10.
    A legal obligation to conduct weapons reviews is a means by which the international community can ensure that States assess whether the use of new types of weapons in armed conflict would raise humanitarian concerns. The use of artificial intelligence in weapon systems greatly complicates the process of conducting reviews, particularly where a weapon system is capable of continuing to ‘learn’ on its own after being deployed on the battlefield. This paper surveys current understandings of the weapons review challenges presented (...)
  • Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). A. Adadi & M. Berrada - 2018 - IEEE Access 6.