Knowledge Graph Based Retrieval-Augmented Generation for Multi-Hop Question Answering Enhancement
Published in the 2024 15th International Conference on Information and Knowledge Technology (IKT)
Authors
Mahdi Amiri Shavaki, Pouria Omrani, Ramin Toosi, Mohammad Ali Akhaee
Abstract
Large Language Models (LLMs) and Knowledge Graphs (KGs) together offer a promising approach to robust and explainable Question Answering (QA). While LLMs excel at natural language understanding, they suffer from knowledge gaps and hallucinations; KGs provide structured knowledge but lack a natural language interface. This paper proposes a system that integrates LLMs and KGs without requiring training, ensuring adaptability across different KGs with minimal human effort. The resulting approach, dubbed Knowledge Graph-extended Retrieval-Augmented Generation (KG-RAG), includes a question decomposition module to enhance multi-hop information retrieval and answer explainability. Using In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting, it generates explicit reasoning chains that are processed step by step to improve truthfulness. Experiments on the MetaQA benchmark show increased accuracy on multi-hop questions, demonstrating KG-RAG's potential to improve transparency in QA by bridging unstructured language understanding with structured knowledge retrieval.
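The core multi-hop idea the abstract describes, decomposing a question into single-hop steps and resolving each against the KG so the reasoning chain stays explicit, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the KG contents, the `retrieve` helper, and the relation-chain interface are all assumptions made for the example (in real KG-RAG the decomposition would come from an LLM with ICL/CoT prompting).

```python
# Toy KG as (subject, relation) -> objects, in the spirit of MetaQA's
# movie-domain triples. Purely illustrative data.
KG = {
    ("Inception", "directed_by"): ["Christopher Nolan"],
    ("Christopher Nolan", "directed"): ["Inception", "Memento", "Dunkirk"],
}

def retrieve(entity, relation):
    """Single-hop KG lookup: one decomposed sub-question per query."""
    return KG.get((entity, relation), [])

def answer_multi_hop(start_entity, relation_chain):
    """Resolve a chain of relations hop by hop, recording each step so the
    reasoning chain can be inspected separately (cf. CoT in the paper)."""
    frontier = [start_entity]
    chain = []
    for rel in relation_chain:
        next_frontier = []
        for ent in frontier:
            hits = retrieve(ent, rel)
            chain.append((ent, rel, hits))  # explicit, checkable step
            next_frontier.extend(hits)
        frontier = next_frontier
    return frontier, chain

# 2-hop question: "Which movies were directed by the director of Inception?"
answers, reasoning = answer_multi_hop("Inception", ["directed_by", "directed"])
```

Here `reasoning` holds one record per hop, which is what makes the final answer set auditable rather than a single opaque generation.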
