Certifiable AI

Applied Sciences 12 (3):1050 (2022)

Abstract

Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems want to understand how these systems operate in order to use them safely and reliably, a new field called ‘explainable AI’ (XAI) has emerged. When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term ‘explanation’ to mean something else, namely ‘interpretation’. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.
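To illustrate the general idea of combining formally specified constraints with statistical learning, the following is a minimal sketch, not the authors' implementation: a learned model's prediction is accepted only if it satisfies constraints derived from a formal specification (e.g. an ontology-based rule), and is otherwise rejected for human or fallback handling. All names here (Constraint, certified_predict, toy_model, the credit-decision rule) are hypothetical and chosen purely for illustration.

```python
# Minimal sketch (not the authors' method): gate a statistical model's output
# with formally specified constraints before accepting it.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Constraint:
    """A formally specified requirement the certified system must satisfy."""
    name: str
    holds: Callable[[dict, str], bool]  # (input record, predicted label) -> bool

def certified_predict(model: Callable[[dict], str],
                      constraints: list[Constraint],
                      record: dict) -> Optional[str]:
    """Return the model's prediction only if every constraint holds;
    otherwise return None so a human or a verified fallback can decide."""
    label = model(record)
    for c in constraints:
        if not c.holds(record, label):
            return None  # prediction violates the specification -> reject
    return label

# Hypothetical ontology-derived rule: a minor applicant can never be
# assigned the label 'approved_unsecured_credit'.
no_credit_for_minors = Constraint(
    name="no_unsecured_credit_for_minors",
    holds=lambda rec, label: not (rec.get("age", 0) < 18
                                  and label == "approved_unsecured_credit"),
)

def toy_model(record: dict) -> str:
    """Stand-in for a trained statistical model."""
    return "approved_unsecured_credit" if record.get("income", 0) > 30000 else "rejected"

print(certified_predict(toy_model, [no_credit_for_minors],
                        {"age": 17, "income": 50000}))  # None: rejected by the rule
print(certified_predict(toy_model, [no_credit_for_minors],
                        {"age": 35, "income": 50000}))  # approved_unsecured_credit
```

The design choice illustrated here is that the statistical component proposes and the formally specified component disposes: no output reaches the user unless it is consistent with the specification against which the system is tested and certified.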

Author's Profile

Jobst Landgrebe
State University of New York (SUNY)

Analytics

Added to PP
2022-02-19

Downloads
220 (#64,703)

6 months
64 (#63,658)
