References
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), a human in the loop is often needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Eoin M. Kenny, Courtney Ford, Molly Quinn & Mark T. Keane - 2021 - Artificial Intelligence 294 (C):103459.
  • Chimeric U-Net – Modifying the standard U-Net towards explainability. Kenrick Schulze, Felix Peppert, Christof Schütte & Vikram Sunkara - 2025 - Artificial Intelligence 338 (C):104240.
  • Explainable AI under contract and tort law: legal incentives and technical challenges. Philipp Hacker, Ralf Krestel, Stefan Grundmann & Felix Naumann - 2020 - Artificial Intelligence and Law 28 (4):415-439.
    This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for (...)
  • Show or suppress? Managing input uncertainty in machine learning model explanations. Danding Wang, Wencan Zhang & Brian Y. Lim - 2021 - Artificial Intelligence 294 (C):103456.
  • Black-box artificial intelligence: an epistemological and critical analysis. Manuel Carabantes - 2020 - AI and Society 35 (2):309-317.
    The artificial intelligence models based on machine learning that exhibit the best predictive accuracy, and are therefore the most powerful, are, paradoxically, those with the most opaque black-box architectures. At the same time, the unstoppable computerization of advanced industrial societies demands the use of these machines in a growing number of domains. The conjunction of both phenomena gives rise to a control problem over AI, which we analyze in this paper by dividing the issue in two. First, we carry out an (...)
  • Hyper-Transcranial Alternating Current Stimulation: Experimental Manipulation of Inter-Brain Synchrony. Caroline Szymanski, Viktor Müller, Timothy R. Brick, Timo von Oertzen & Ulman Lindenberger - 2017 - Frontiers in Human Neuroscience 11.
  • Relation between prognostics predictor evaluation metrics and local interpretability SHAP values. Marcia L. Baptista, Kai Goebel & Elsa M. P. Henriques - 2022 - Artificial Intelligence 306:103667.
  • Assessing fidelity in XAI post-hoc techniques: A comparative study with ground truth explanations datasets. Miquel Miró-Nicolau, Antoni Jaume-i-Capó & Gabriel Moyà-Alcover - 2024 - Artificial Intelligence 335 (C):104179.
  • Local and global explanations of agent behavior: Integrating strategy summaries with saliency maps. Tobias Huber, Katharina Weitz, Elisabeth André & Ofra Amir - 2021 - Artificial Intelligence 301 (C):103571.
  • Embedding deep networks into visual explanations. Zhongang Qi, Saeed Khorram & Li Fuxin - 2021 - Artificial Intelligence 292:103435.
  • Tracking and classification performances in the bio-inspired asymmetric and symmetric networks. Naohiro Ishii, Kazunori Iwata & Tokuro Matsuo - forthcoming - Logic Journal of the IGPL.
    Machine learning, deep learning, and neural networks are applied extensively in the development of many fields. Although these technologies have improved greatly, they are often said to be opaque in terms of explainability. Explainable neural functions will be essential for realizing explainability in such networks. In this paper, we show that bio-inspired networks are useful for explaining the tracking and classification of features. First, an asymmetric network with nonlinear functions is created based on the bio-inspired retinal network. (...)
  • Using Transcranial Alternating Current Stimulation to Improve Romantic Relationships Can Be a Promising Approach. Shen Liu, Ru Ma, Xiaoming Liu, Chong Zhang, Yijun Chen, Chenggong Jin, Hangwei Wang, Jiangtian Cui & Xiaochu Zhang - 2019 - Frontiers in Psychology 10.
  • An Inverse Relative Age Effect in Male Alpine Skiers at the Absolute Top Level. Øyvind Bjerke, Arve Vorland Pedersen, Tore K. Aune & Håvard Lorås - 2017 - Frontiers in Psychology 8.