MarkTechPost@AI · March 20
MemQ: Enhancing Knowledge Graph Question Answering with Memory-Augmented Query Reconstruction

MemQ is a memory-augmented framework designed to separate reasoning from tool invocation in large language model (LLM)-based knowledge graph question answering (KGQA), thereby reducing hallucinations. By introducing a query memory module, MemQ improves query reconstruction and sharpens reasoning clarity. The method enables natural-language reasoning while mitigating errors in tool usage. Experiments on the WebQSP and CWQ benchmarks show that MemQ outperforms existing methods and achieves state-of-the-art results. By resolving the confusion between tool utilization and reasoning, MemQ improves the readability and accuracy of LLM-generated responses, offering a more effective approach to KGQA.

🧠 MemQ builds a structured query memory from LLM-generated descriptions of decomposed query statements, allowing reasoning to proceed independently of tool invocation. The approach improves readability by generating explicit reasoning steps and retrieving relevant memory entries based on semantic similarity.

🔍 MemQ operates through three key tasks: memory construction, knowledge reasoning, and query reconstruction. Memory construction stores query statements alongside their natural-language descriptions for efficient retrieval; knowledge reasoning generates structured multi-step plans that ensure a logical progression toward the answer; query reconstruction retrieves relevant query statements by semantic similarity and assembles them into the final query.

📊 Experimental results show that MemQ achieves state-of-the-art performance on the WebQSP and CWQ benchmarks, demonstrating its effectiveness in strengthening LLM-based KGQA reasoning. Analytical experiments highlight its superior structural and edge accuracy, while ablation studies confirm MemQ's contributions to tool utilization and reasoning stability.

LLMs have shown strong performance in Knowledge Graph Question Answering (KGQA) by leveraging planning and interactive strategies to query knowledge graphs. Many existing approaches rely on SPARQL-based tools to retrieve information, allowing models to generate accurate answers. Some methods enhance LLMs’ reasoning abilities by constructing tool-based reasoning paths, while others employ decision-making frameworks that use environmental feedback to interact with knowledge graphs. Although these strategies have improved KGQA accuracy, they often blur the distinction between tool use and actual reasoning. This confusion reduces interpretability, diminishes readability, and increases the risk of hallucinated tool invocations, where models generate incorrect or irrelevant responses due to over-reliance on parametric knowledge.
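Concretely, a single tool invocation in such a pipeline looks roughly like the sketch below. The question, the Freebase-style relation names, and the `run_sparql` stub are our illustrations, not any specific system's API; a real agent would post the query to a Freebase or Wikidata SPARQL endpoint.

```python
# Sketch of the SPARQL "tool call" that tool-based KGQA pipelines
# interleave with LLM reasoning. All names here are illustrative.

QUESTION = "What is the capital of the country the Nile flows through?"

# Two-hop query in Freebase-style notation (relation names assumed).
SPARQL = """SELECT ?capital WHERE {
  ?river ns:type.object.name "Nile"@en .
  ?river ns:geography.river.basin_countries ?country .
  ?country ns:location.country.capital ?capital .
}"""

def run_sparql(query: str) -> list[dict]:
    # Mock endpoint so the sketch runs offline; a real tool would issue
    # an HTTP request and parse the JSON result bindings.
    return [{"capital": "Cairo"}]

print(run_sparql(SPARQL))  # [{'capital': 'Cairo'}]
```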

To address these limitations, researchers have explored memory-augmented techniques that provide external knowledge storage to support complex reasoning. Prior work has integrated memory modules for long-term context retention, enabling more reliable decision-making. Early KGQA methods used key-value memory and graph neural networks to infer answers, while recent LLM-based approaches leverage large-scale models for enhanced reasoning. Some strategies employ supervised fine-tuning to improve understanding, while others use discriminative techniques to mitigate hallucinations. However, existing KGQA methods still struggle to separate reasoning from tool invocation, which keeps models from focusing on logical inference itself.

Researchers from the Harbin Institute of Technology propose Memory-augmented Query Reconstruction (MemQ), a framework that separates reasoning from tool invocation in LLM-based KGQA. MemQ establishes a structured query memory using LLM-generated descriptions of decomposed query statements, enabling independent reasoning. This approach enhances readability by generating explicit reasoning steps and retrieving relevant memory based on semantic similarity. MemQ improves interpretability and reduces hallucinated tool use by eliminating unnecessary tool reliance. Experimental results show that MemQ achieves state-of-the-art performance on WebQSP and CWQ benchmarks, demonstrating its effectiveness in enhancing LLM-based KGQA reasoning.

MemQ is designed to separate reasoning from tool invocation in LLM-based KGQA through three key tasks: memory construction, knowledge reasoning, and query reconstruction. Memory construction involves storing query statements with corresponding natural language descriptions for efficient retrieval. The knowledge reasoning process generates structured multi-step reasoning plans, ensuring logical progression in answering queries. Query reconstruction then retrieves relevant query statements based on semantic similarity and assembles them into a final query. MemQ enhances reasoning by fine-tuning LLMs with explanation-statement pairs and uses an adaptive memory recall strategy, outperforming prior methods on WebQSP and CWQ benchmarks with state-of-the-art results.
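The three stages can be illustrated with a minimal, dependency-free sketch. The bag-of-words cosine stands in for the learned semantic encoder, and the memory entries, plan, and relation names are hand-written for illustration; in MemQ the descriptions are LLM-generated and the reasoning plan comes from the fine-tuned model.

```python
# Toy walkthrough of MemQ's three stages: memory construction,
# knowledge reasoning, and query reconstruction. All data is illustrative.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Bag-of-words "embedding"; MemQ uses a learned semantic encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1) Memory construction: query statements stored with natural-language
#    descriptions (LLM-generated in the paper; hand-written here).
raw_memory = [
    ("find the country a river flows through",
     "?river ns:geography.river.basin_countries ?country ."),
    ("find the capital of a country",
     "?country ns:location.country.capital ?capital ."),
    ("find the population of a city",
     "?city ns:location.statistical_region.population ?pop ."),
]
memory = [(desc, stmt, embed(desc)) for desc, stmt in raw_memory]

# 2) Knowledge reasoning: a multi-step natural-language plan, which the
#    fine-tuned LLM would normally produce.
plan = [
    "find the country the Nile flows through",
    "find the capital of that country",
]

# 3) Query reconstruction: retrieve the best-matching statement per step
#    by semantic similarity and assemble the final query.
def reconstruct(plan_steps: list[str]) -> str:
    parts = []
    for step in plan_steps:
        q = embed(step)
        best = max(memory, key=lambda m: cosine(q, m[2]))
        parts.append(best[1])
    return "SELECT ?capital WHERE {\n  " + "\n  ".join(parts) + "\n}"

print(reconstruct(plan))
```

The key design point the sketch captures is that the LLM only ever produces natural-language reasoning steps; the executable query statements come from memory, which is what removes hallucinated tool invocations from the model's job.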

The experiments assess MemQ’s performance in knowledge graph question-answering using WebQSP and CWQ datasets. Hits@1 and F1 scores serve as evaluation metrics, with comparisons against tool-based baselines like RoG and ToG. MemQ, built on Llama2-7b, outperforms previous methods, showing improved reasoning via a memory-augmented approach. Analytical experiments highlight superior structural and edge accuracy. Ablation studies confirm MemQ’s effectiveness in tool utilization and reasoning stability. Additional analyses explore reasoning errors, hallucinations, data efficiency, and model universality, demonstrating its adaptability across architectures. MemQ significantly enhances structured reasoning while reducing errors in multi-step queries.
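For context, the two reported metrics follow their standard KGQA definitions: Hits@1 asks whether the top-ranked prediction is a gold answer, and F1 compares the predicted answer set against the gold set. A minimal sketch (the benchmarks ship their own official evaluation scripts):

```python
# Standard KGQA evaluation metrics, per-question.

def hits_at_1(ranked_preds: list[str], gold: set[str]) -> float:
    # 1.0 if the top-ranked predicted answer is a gold answer.
    return 1.0 if ranked_preds and ranked_preds[0] in gold else 0.0

def f1(preds: set[str], gold: set[str]) -> float:
    # Set-level F1 between predicted and gold answer sets.
    if not preds or not gold:
        return 0.0
    tp = len(preds & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(preds), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

print(hits_at_1(["Cairo"], {"Cairo"}))   # 1.0
print(f1({"Cairo", "Giza"}, {"Cairo"}))  # ~0.667
```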

In conclusion, the study introduces MemQ, a memory-augmented framework that separates LLM reasoning from tool invocation to reduce hallucinations in KGQA. MemQ improves query reconstruction and enhances reasoning clarity by incorporating a query memory module. The approach enables natural language reasoning while mitigating errors in tool usage. Experiments on WebQSP and CWQ benchmarks demonstrate that MemQ outperforms existing methods, achieving state-of-the-art results. By addressing the confusion between tool utilization and reasoning, MemQ enhances the readability and accuracy of LLM-generated responses, offering a more effective approach to KGQA.


Check out the Paper. All credit for this research goes to the researchers of this project.
