  • The communicative functions of metaphors between explanation and persuasion. Fabrizio Macagno & Maria Grazia Rossi - 2021 - In Fabrizio Macagno & Alessandro Capone (eds.), Inquiries in philosophical pragmatics. Theoretical developments. Cham: Springer. pp. 171-191.
    In the literature, the pragmatic dimension of metaphors has been clearly acknowledged. Metaphors are regarded as having different possible uses, and in particular, they are commonly viewed as instruments for pursuing persuasion. However, an analysis of the specific conversational purposes that they can be aimed at achieving in a dialogue, and of their adequacy to those purposes, is still missing. In this paper, we will address this issue, focusing on the distinction between explanatory and persuasive goals. The difference between explanation and persuasion (...)
  • Levels of explainable artificial intelligence for human-aligned conversational explanations. Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal & Francisco Cruz - 2021 - Artificial Intelligence 299 (C):103525.
  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • Explanation–Question–Response dialogue: An argumentative tool for explainable AI. Federico Castagna, Peter McBurney & Simon Parsons - forthcoming - Argument and Computation:1-23.
    Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of control over our lives to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way that it is (...)
  • Knowledge graphs as tools for explainable machine learning: A survey. Ilaria Tiddi & Stefan Schlobach - 2022 - Artificial Intelligence 302 (C):103627.
  • Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. Particularly, this (...)
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • Combining explanation and argumentation in dialogue. Floris Bex & Douglas Walton - 2016 - Argument and Computation 7 (1):55-68.
  • An explanation-oriented inquiry dialogue game for expert collaborative recommendations. Qurat-ul-ain Shaheen, Katarzyna Budzynska & Carles Sierra - forthcoming - Argument and Computation:1-36.
    This work presents a requirement analysis for collaborative dialogues among medical experts and an inquiry dialogue game based on this analysis for incorporating explainability into multiagent system design. The game allows experts with different knowledge bases to collaboratively make recommendations while generating rich traces of the reasoning process through combining explanation-based illocutionary forces in an inquiry dialogue. The dialogue game was implemented as a prototype web-application and evaluated against the specification through a formative user study. The user study confirms that (...)