AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas

In Proceedings of AISC 2017 (2018)

Abstract

Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress achieved in specific tasks, mostly fail to provide a transparent account of the reasons determining their behaviour (in cases of both successful and unsuccessful outputs). This is due to the fact that the classical problem of opacity in artificial neural networks (ANNs) is exacerbated by the adoption of current Deep Learning techniques [LeCun, Bengio, Hinton, 2015]. In this paper we argue that the explanatory deficit of such techniques represents an important problem, which limits their adoption in the cognitive modelling and computational cognitive science arena. In particular, we will show how current attempts at explaining the behaviour of deep nets (see, e.g., [Ritter et al. 2017]) are not satisfactory. As a possible way out of this problem, we present two different research strategies. The first strategy aims at dealing with the opacity problem by providing a more abstract interpretation of neural mechanisms and representations. This approach is adopted, for example, by the biologically inspired SPAUN architecture [Eliasmith et al., 2012] and by other proposals suggesting, for example, the interpretation of neural networks in terms of the Conceptual Spaces framework [Gärdenfors 2000; Lieto, Chella and Frixione, 2017]. All such proposals presuppose that the neural level of representation can be considered somehow irrelevant for attacking the problem of explanation [Lieto, Lebiere and Oltramari, 2017]. In our opinion, pursuing this research direction can still preserve the use of deep learning techniques in artificial cognitive models, provided that novel and additional results in terms of “transparency” are obtained. The second strategy is somewhat at odds with the previous one and tries to address the explanatory issue without directly solving the “opacity” problem. In this case, the idea is that of resorting to pre-compiled, plausible explanatory models of the world used in combination with deep nets (see, e.g., [Augello et al. 2017]). We argue that this research agenda, even if it does not directly fit the explanatory needs of Computational Cognitive Science, can still be useful for producing results in the area of applied AI, shedding light on models of the interaction between low-level and high-level tasks (e.g., between perceptual categorization and explanation) in artificial systems.

Author's Profile

Antonio Lieto
University of Turin
