MarkTechPost@AI · April 13, 09:05
LightPROF: A Lightweight AI Framework that Enables Small-Scale Language Models to Perform Complex Reasoning Over Knowledge Graphs (KGs) Using Structured Prompts

LightPROF is an innovative lightweight AI framework designed to improve the reasoning ability of small-scale language models over knowledge graphs (KGs). Through precise KG retrieval and efficient encoding, it enables small models to carry out complex reasoning tasks. Its core strengths lie in a retrieval module that uses relations as the basic retrieval unit to limit the search scope, an embedding module built on a compact Transformer-based Knowledge Adapter, and a reasoning module that combines embedded representation vectors with carefully designed prompts. Experiments show that LightPROF significantly outperforms existing models on both the WebQSP and CWQ datasets while also excelling in resource utilization, offering a new direction for small-scale language models in KG reasoning.

💡 The core of LightPROF lies in its design: three modules, Retrieval, Embedding, and Reasoning, work together to strengthen small-scale language models' reasoning over knowledge graphs. The Retrieval module retrieves at the granularity of relations, narrowing the search space and improving efficiency.

🔑 LightPROF adopts a compact Transformer-based Knowledge Adapter to process the graph structure and integrate its information into the model, allowing efficient reasoning with smaller models.

🚀 Experiments show that LightPROF reaches 83.7% accuracy on the WebQSP dataset and 59.3% on the more challenging CWQ dataset, demonstrating its effectiveness on multi-hop and complex reasoning challenges.

⚡️ LightPROF is highly resource-efficient: compared with StructGPT, it cuts processing time by 30% and input token usage by 98%, making it more cost-effective and broadening the application prospects of small-scale language models.

Large Language Models (LLMs) have revolutionized natural language processing, handling complex zero-shot tasks thanks to extensive training data and vast parameter counts. However, LLMs often struggle with knowledge-intensive tasks due to limited task-specific prior knowledge and understanding capabilities. Effective reasoning requires access to reliable and continuously updated knowledge bases, and Knowledge Graphs (KGs) are ideal candidates because of their structured semantic framework. Current approaches to LLM reasoning on KGs face two obstacles: representing KG content as extensive text fails to convey the rich logical relationships within the graph structure, and the retrieval and reasoning processes demand numerous LLM calls and substantial reasoning power.

Prompt engineering has emerged as a critical technique for expanding LLM capabilities across various applications without modifying model parameters. The field has evolved from simple zero-shot and few-shot prompts to more complex approaches like Chain-of-Thought (CoT), Tree-of-Thoughts (ToT), and Graph-of-Thoughts (GoT). KG-based LLM reasoning has gained traction as KGs provide explicit, structured knowledge that enhances LLMs’ knowledge awareness with clear logical structures. More flexible solutions like KAPING, KGGPT, StructGPT, ToG, and KnowledgeNavigator construct LLM prompts using KG factual information with various techniques like semantic similarity retrieval, multi-step reasoning frameworks, and beam search on KGs to enhance reasoning capabilities.

Researchers from Beijing University of Posts and Telecommunications, Hangzhou Dianzi University, Singapore Management University, National University of Singapore, Institute of Computing Technology at Chinese Academy of Sciences, and Xi’an Jiaotong University have proposed LightPROF, a Lightweight and efficient Prompt learning-ReasOning Framework. The Retrieve-Embed-Reason framework enables small-scale LLMs to perform stable retrieval and efficient reasoning on KGs. It contains three core components: Retrieval, Embedding, and Reasoning modules. The Retrieval module uses relations as fundamental retrieval units and limits the scope based on question semantics, the Embedding module uses a compact Transformer-based Knowledge Adapter, and the Reasoning module combines embedded representation vectors with carefully designed prompts. LightPROF supports various open-source LLMs and KGs while only requiring Knowledge Adapter tuning during training.
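
To make the three-stage flow concrete, here is a minimal, hypothetical Python sketch of a Retrieve-Embed-Reason pipeline over a toy triple-store KG. The function names (retrieve_subgraph, embed_subgraph, reason) and the overlap-based relation scoring are illustrative assumptions, not LightPROF's actual implementation.

```python
# Hypothetical sketch of a Retrieve-Embed-Reason pipeline; names and logic are
# illustrative only and are NOT LightPROF's actual API.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

def retrieve_subgraph(question: str, kg: List[Triple], top_k: int = 5) -> List[Triple]:
    """Retrieve step: use relations as the retrieval unit and keep only triples
    whose relation looks relevant to the question (here, crude token overlap)."""
    q_tokens = set(question.lower().split())
    def relation_score(rel: str) -> int:
        return len(q_tokens & set(rel.lower().replace("_", " ").split()))
    relations = sorted({r for _, r, _ in kg}, key=relation_score, reverse=True)[:top_k]
    return [t for t in kg if t[1] in relations]

def embed_subgraph(triples: List[Triple]) -> List[float]:
    """Embed step: stand-in for the compact Transformer-based Knowledge Adapter
    that condenses the reasoning graph into a few soft tokens."""
    return [float(len(triples))]  # a real adapter would return learned vectors

def reason(question: str, soft_tokens: List[float], triples: List[Triple]) -> str:
    """Reason step: combine the embedded representation with a designed prompt
    and (conceptually) hand it to a frozen small-scale LLM."""
    facts = "; ".join(f"{h} --{r}--> {t}" for h, r, t in triples)
    return f"Facts: {facts}\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    kg = [("Paris", "capital_of", "France"), ("Paris", "located_in", "Europe")]
    q = "What country is Paris the capital of?"
    sub = retrieve_subgraph(q, kg)
    print(reason(q, embed_subgraph(sub), sub))
```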

LightPROF is evaluated on two Freebase-based public datasets: WebQuestionsSP (WebQSP) and ComplexWebQuestions (CWQ). WebQSP serves as a benchmark with fewer questions (4,737) but a larger KG, while CWQ is designed for complex KG question answering with 34,689 question-answer pairs built upon WebQSP. Performance is measured using match accuracy (Hits@1), which evaluates whether the model’s top answer is correct. LightPROF is compared against three categories of baseline methods: full fine-tuning approaches (including KV-Mem, EmbedKGQA, TransferNet, NSM, etc.), vanilla LLM methods (featuring LLaMa series models), and LLM+KGs methods (such as StructGPT, ToG, KnowledgeNavigator, and AgentBench).
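
Hits@1 simply measures the fraction of questions whose top-ranked prediction is a correct answer. A minimal sketch of how such a score could be computed (the function name and data layout are ours, not the paper's):

```python
def hits_at_1(predictions, gold_answers):
    """Fraction of questions whose top predicted answer appears in the gold set.
    `predictions` maps a question id to a ranked list of answers;
    `gold_answers` maps a question id to the set of correct answers."""
    correct = sum(
        1 for qid, ranked in predictions.items()
        if ranked and ranked[0] in gold_answers.get(qid, set())
    )
    return correct / len(predictions)

# Example: 1 of 2 questions answered correctly at rank 1 -> Hits@1 = 0.5
print(hits_at_1(
    {"q1": ["France"], "q2": ["Berlin"]},
    {"q1": {"France"}, "q2": {"Paris"}},
))
```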

LightPROF significantly outperforms state-of-the-art models, achieving 83.7% accuracy on the WebQSP dataset and 59.3% on the more challenging CWQ dataset. These results validate LightPROF’s effectiveness in handling multi-hop and complex reasoning challenges in KG question answering. When integrating different LLMs within the framework, LightPROF consistently enhances performance regardless of the baseline capabilities of the original models. This plug-and-play integration strategy eliminates the need for costly LLM fine-tuning. Efficiency evaluations against StructGPT reveal LightPROF’s superior resource utilization, with a 30% reduction in processing time, a 98% reduction in input token usage, and significantly fewer tokens per request.

In conclusion, researchers introduced LightPROF, a novel framework that enhances LLM reasoning through accurate retrieval and efficient encoding of KGs. It narrows the retrieval scope by sampling KGs using stable relations as units. The researchers developed a compact Knowledge Adapter that effectively parses graph structures and integrates their information, enabling efficient reasoning with smaller LLMs. It condenses reasoning graphs into fewer tokens while achieving comprehensive alignment with the LLM input space through the Projector component. Future research directions include developing KG encoders with strong generalization capabilities that can be applied to unseen KG data without retraining, and designing unified cross-modal encoders capable of handling multimodal KGs.
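
As a rough illustration of the encode-and-project idea described above, the PyTorch-style sketch below compresses a set of triple embeddings into a small number of soft tokens and projects them into an LLM's input embedding space. All module names, dimensions, and the attention-pooling design are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a compact knowledge adapter + projector (not the paper's code).
import torch
import torch.nn as nn

class KnowledgeAdapter(nn.Module):
    def __init__(self, triple_dim=256, num_soft_tokens=8, llm_dim=4096):
        super().__init__()
        # A small Transformer encoder contextualizes the retrieved triples.
        layer = nn.TransformerEncoderLayer(d_model=triple_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Learned queries pool the encoded triples into a fixed number of soft tokens.
        self.queries = nn.Parameter(torch.randn(num_soft_tokens, triple_dim))
        self.pool = nn.MultiheadAttention(triple_dim, num_heads=4, batch_first=True)
        # Projector aligns the soft tokens with the (frozen) LLM's input embedding space.
        self.projector = nn.Linear(triple_dim, llm_dim)

    def forward(self, triple_embeds):  # (batch, num_triples, triple_dim)
        encoded = self.encoder(triple_embeds)
        q = self.queries.unsqueeze(0).expand(triple_embeds.size(0), -1, -1)
        soft_tokens, _ = self.pool(q, encoded, encoded)
        return self.projector(soft_tokens)  # (batch, num_soft_tokens, llm_dim)

# The projected soft tokens would be prepended to the prompt's token embeddings;
# only the adapter is trained while the LLM stays frozen.
adapter = KnowledgeAdapter()
print(adapter(torch.randn(1, 12, 256)).shape)  # torch.Size([1, 8, 4096])
```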


Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 85k+ ML SubReddit.



Related tags

LightPROF · Language Models · Knowledge Graphs · Reasoning Framework