Towards LLM-KG Symbiosis for Reducing Factual Hallucinations

Tommaso Dolci, Milos Jovanovik, Katja Hose

Proceedings of the Workshops of the EDBT/ICDT Joint Conference, 2026, vol. 4192.

Abstract

The widespread adoption of Large Language Models (LLMs) has increased concerns about hallucinations, i.e., the generation of incorrect or nonsensical claims. While popular approaches to reduce hallucinations (e.g., RAG) are promising, they still suffer from intrinsic hallucinations and remain largely limited to closed-domain scenarios, where the external source of knowledge is complete and sufficient to generate a response. Recently, knowledge graphs (KGs) have emerged as trustworthy sources to detect and mitigate hallucinations either before or after generation, but their adoption remains challenging for open-domain questions and long responses containing a mixture of correct, incorrect, and opinionated claims. This paper discusses the main opportunities and limitations of current approaches for reducing LLM hallucinations by KG grounding, and presents a framework for LLM-KG symbiosis to address the following open challenges: factuality assessment of multiple-claim responses, KG-grounded retrieval under incomplete data, and uncertainty management.

BibTeX

  @inproceedings{dolci2026towards,
    title={Towards LLM-KG Symbiosis for Reducing Factual Hallucinations},
    author={Dolci, Tommaso and Jovanovik, Milos and Hose, Katja},
    booktitle={Proceedings of the Workshops of the EDBT/ICDT Joint Conference},
    publisher={CEUR-WS.org},
    year={2026},
    volume={4192}
  }