Epistemic considerations when AI answers questions for us

Abstract

In this position paper, we argue that careless reliance on AI to answer our questions and to judge our output violates Grice's Maxim of Quality as well as Lemoine's legal Maxim of Innocence, commits an (unwarranted) authority fallacy, and, in the absence of assessment signals, produces Type II errors that result from fallacies of the inverse. What is missing in the focus on the output and results of AI-generated and AI-evaluated content is, apart from paying proper tribute, the demand to follow a person's thought process (or a machine's decision process). Deliberately avoiding neural networks that cannot explain how they reach their conclusions, we introduce logic-symbolic inference to handle any possible epistemics a human or artificial information processor may have. Our system can deal with various belief systems and shows how decisions may differ for what is true, false, realistic, unrealistic, literal, or anomalous. As it stands, state-of-the-art AI such as ChatGPT is a sorcerer's apprentice.
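The following is a minimal, hypothetical sketch of what logic-symbolic inference over differing belief systems could look like; the labels, belief systems, and plausibility rules here are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical sketch: rule-based symbolic labelling of a claim relative to a
# belief system. Labels, facts, and plausibility rules are illustrative only.
from dataclasses import dataclass
from typing import Callable

LABELS = ("true", "false", "realistic", "unrealistic", "literal", "anomalous")

@dataclass
class BeliefSystem:
    """A named set of accepted facts plus a rule for judging other claims."""
    name: str
    facts: set[str]                      # propositions held to be true
    plausible: Callable[[str], bool]     # judges claims outside the fact base

def classify(claim: str, beliefs: BeliefSystem) -> str:
    """Return a label for a claim, relative to one belief system."""
    if claim in beliefs.facts:
        return "true"
    if f"not {claim}" in beliefs.facts:
        return "false"
    return "realistic" if beliefs.plausible(claim) else "unrealistic"

# Two toy belief systems disagree on the same claim.
empiricist = BeliefSystem("empiricist", {"water boils at 100C"},
                          plausible=lambda c: "magic" not in c)
folklorist = BeliefSystem("folklorist", set(),
                          plausible=lambda c: True)

claim = "a magic spell keeps water from boiling"
print(classify(claim, empiricist))   # -> "unrealistic"
print(classify(claim, folklorist))   # -> "realistic"
```

Because every label is produced by an explicit rule, the inference chain can be inspected, unlike a neural network's opaque judgment.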

Author's Profile

Juliet J.-Y. Chen
Laboratory for Artificial Intelligence In Design (AiDLab)
