Ontologies, arguments, and Large-Language Models

In Ítalo Oliveira (ed.), Joint Ontologies Workshops (JOWO). Twente, Netherlands: CEUR. pp. 1-9 (2024)

Abstract

The explosion of interest in large language models (LLMs) has been accompanied by concerns over the extent to which generated outputs can be trusted, owing to the prevalence of bias, hallucinations, and related failures. Accordingly, there is growing interest in using ontologies and knowledge graphs to make LLMs more trustworthy. This interest rests on the long history of ontologies and knowledge graphs being used to construct human-comprehensible justifications for model outputs, as well as to trace the impact of evidence on other evidence. Understanding the nature of arguments and argumentation is critical to both tasks, especially when LLM output conflicts with what users expect. The central contribution of this article is to extend the Arguments Ontology (ARGO) - an ontology specific to the domain of argumentation and evidence broadly construed - into the space of LLM fact-checking, in the interest of promoting justification and traceability research through the use of ARGO-based ‘blueprints’.

Author Profiles

John Beverley
University at Buffalo
Francesco Franda
Université de Neuchâtel
Barry Smith
University at Buffalo
