Unexplainability and Incomprehensibility of Artificial Intelligence

Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions that affect them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), showing, in essence, that advanced AIs would not be able to accurately explain some of their decisions, and that for the decisions they could explain, people would not understand some of those explanations.
Archival date: 2019-06-24
