Abstract
In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present false information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term borrowed from human psychology, with the important difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than to perceptual failures.
AI hallucinations may originate in the data itself, that is, in the source content, or in the training procedure, i.e. in the way knowledge was encoded in the model’s parameters, so that errors in encoding and decoding textual and non-textual representations can cause hallucinations. In this paper, we will examine how such errors arise and how they might be mitigated. For this purpose, we will analyze the suitability of justification logic to act as a proof checker validating the correctness of large language model (LLM) responses. Justification logic was developed by S. Artemov, and later mostly by Artemov and M. Fitting, deriving its main idea from the logic of proofs (LP): knowledge and belief modalities are treated as justification terms, i.e. t:X stands for “t is a (proper) justification for X”. Justification logic originated from attempts to create a semantics for intuitionistic logic in which proofs were the most proper justifications, but in its further development it has been applied to other kinds of justifications as well.
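As a brief illustration of this notation (a standard presentation from the justification logic literature, not specific to this paper), the basic operations on justification terms and the axioms that allow a term to behave as a proof check can be stated as follows:

Application: s:(X → Y) → (t:X → [s·t]:Y)
Sum: s:X → [s+t]:X and t:X → [s+t]:X
Factivity: t:X → X
Proof verification: t:X → !t:(t:X)

Factivity in particular captures the requirement that a properly justified statement is in fact true, which is the property a proof-checking safety layer would need to enforce on LLM responses.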
In light of recent attempts to mitigate incorrect LLM responses, we will analyze various guardrails currently applied to LLM outputs and show how justification logic may provide benefits as an AI safety layer against false data.