Abstract
The explosion of interest in large language models (LLMs) has been accompanied by concerns over the extent to
which generated outputs can be trusted, owing to the prevalence of bias, hallucinations, and so forth. Accordingly,
there is growing interest in using ontologies and knowledge graphs to make LLMs more trustworthy. This rests on the long history of using ontologies and knowledge graphs to construct human-comprehensible justifications for model outputs and to trace how items of evidence bear on one another. Understanding the nature of arguments and argumentation is critical to both justification and traceability, especially when LLM output conflicts with what users expect. The central contribution of this article is to extend the Arguments Ontology (ARGO) - an ontology covering the domain of argumentation and evidence broadly construed - into the space of LLM fact-checking, with the aim of promoting justification and traceability research through ARGO-based 'blueprints'.