MarkTechPost@AI — September 23, 2024
MAGICORE: An AI Framework for Multi Agent Iteration for Coarse-to-fine Refinement

 

MAGICORE is an AI framework for multi-agent iteration that improves LLM performance through coarse-to-fine refinement, addresses problems such as over-refinement, and performs strongly across multiple datasets.

🎯MAGICORE classifies problems as easy or hard, applying coarse-grained aggregation to easy problems and fine-grained, multi-agent iterative refinement to hard ones, avoiding over-refinement.

💪The system uses three agents — Solver, Reviewer, and Refiner — and relies on step-wise reward model (RM) scores for error localization and feedback, improving refinement quality.

🚀MAGICORE outperforms methods such as Self-Refine and Best-of-k on multiple math reasoning datasets, and its performance keeps improving as the number of iterations grows.

🌟MAGICORE combines multi-agent collaboration with coarse-to-fine refinement, sidestepping the limitations of LLM self-verification and improving reasoning ability and overall performance.

💰MAGICORE adaptively allocates compute to challenging problems, improving accuracy through selective refinement while using fewer samples and avoiding over-correction.

Test-time aggregation strategies, such as generating and combining multiple answers, can enhance LLM performance but eventually hit diminishing returns. Refinement, where model feedback is used to improve answers iteratively, presents an alternative. However, it faces three challenges: (1) excessive refinement, which can lead to over-correction and reduced accuracy; (2) difficulty in identifying and addressing specific errors, as LLMs struggle with targeted self-correction; and (3) determining the right amount of refinement, as insufficient refinement can leave errors unresolved while excessive iterations waste computational resources.
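The coarse aggregation the paragraph refers to can be sketched as a simple majority vote over independently sampled answers — a minimal, self-contained illustration, not MAGICORE's full aggregation procedure:

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most frequent final answer among k sampled solutions.
    This is the coarse, self-consistency-style aggregation that improves
    with more samples but eventually hits diminishing returns."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Five sampled final answers from an LLM (toy data):
samples = ["42", "41", "42", "42", "17"]
print(majority_vote(samples))  # -> 42
```

Past the point where the vote saturates, adding samples no longer changes the winner — which is exactly where refinement becomes the more useful lever.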

Researchers at UNC-Chapel Hill introduced MAGICORE, a framework for Multi-Agent Iteration for Coarse-to-Fine Refinement. MAGICORE addresses excessive refinement by classifying problems as easy or hard, solving easy ones with coarse aggregation and hard ones with fine, iterative multi-agent refinement. The system uses three agents—Solver, Reviewer, and Refiner—enhanced by step-wise Reward Model (RM) scores for error localization and feedback. MAGICORE outperforms methods like Self-Refine and Best-of-k across multiple math reasoning datasets, with significant performance gains even after one iteration. It continues to improve with more iterations, highlighting its efficiency and refinement capabilities.

MAGICORE improves reasoning through multi-agent collaboration and coarse-to-fine refinement. While Self-Consistency (SC) generates multiple solutions and selects the most frequent answer, MAGICORE uses external RMs to guide refinement, avoiding SC’s limitations. Unlike past methods that rely on LLM self-verification, MAGICORE uses RMs to identify errors and refine responses effectively. It employs a multi-agent system, where agents take distinct roles—solver, reviewer, and refiner—to improve solutions iteratively. This approach avoids excessive or insufficient refinement and enhances performance across various tasks, outperforming aggregation methods and LLM-based self-evaluation techniques.
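The contrast with Self-Consistency can be made concrete: instead of counting raw answer frequencies, an external reward model scores each solution, and scores are summed per answer. The sketch below assumes a hypothetical `rm_score` callable standing in for the trained reward model; it is an illustration of RM-weighted aggregation, not the paper's exact procedure:

```python
def weighted_vote(solutions, rm_score):
    """Aggregate by summing reward-model scores per final answer rather
    than raw counts, so a confidently-scored minority answer can beat a
    noisy majority. `rm_score` is a stand-in for an external RM."""
    totals = {}
    for sol in solutions:
        totals[sol["answer"]] = totals.get(sol["answer"], 0.0) + rm_score(sol)
    return max(totals, key=totals.get)

# Toy example: one high-confidence solution outweighs two weak ones.
solutions = [
    {"answer": "9", "score": 0.9},
    {"answer": "7", "score": 0.2},
    {"answer": "7", "score": 0.3},
]
print(weighted_vote(solutions, rm_score=lambda s: s["score"]))  # -> 9
```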

MAGICORE is an adaptive framework designed to enhance the performance and efficiency of multi-step reasoning in LLMs by using intelligent test-time aggregation and refinement. It categorizes problems as easy or hard, applying coarse aggregation for simpler tasks and fine-grained, iterative multi-agent refinement for more complex ones. The framework utilizes two reward models: an Outcome Reward Model (ORM) for overall solution quality and a Process Reward Model (PRM) for step-by-step accuracy. MAGICORE employs three agents—the Solver, Reviewer, and Refiner—to generate, evaluate, and improve solutions iteratively until optimal answers are achieved. This approach prevents excessive refinement, improves error localization, and ensures thorough solution enhancement.
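The three-agent loop described above can be sketched as follows. The agent and reward-model interfaces (`solver`, `reviewer`, `refiner`, `orm`) and the stopping threshold are assumptions for illustration; in MAGICORE each role is played by an LLM and the reward models are trained ORM/PRM scorers:

```python
def refine(problem, solver, reviewer, refiner, orm,
           max_iters=3, threshold=0.9):
    """Sketch of the fine-grained refinement loop: the Solver proposes a
    solution, the Reviewer produces targeted step-level feedback, and the
    Refiner revises. The outcome reward model (ORM) gates each round so
    refinement stops once the solution looks good (avoiding over-correction)
    or the iteration budget runs out."""
    solution = solver(problem)
    for _ in range(max_iters):
        if orm(problem, solution) >= threshold:
            break                                  # good enough: stop early
        feedback = reviewer(problem, solution)     # error localization
        solution = refiner(problem, solution, feedback)
    return solution

# Stub agents that converge after one revision (toy demonstration):
result = refine(
    "2 + 2 = ?",
    solver=lambda p: "draft",
    reviewer=lambda p, s: "step 2 is wrong",
    refiner=lambda p, s, f: "revised",
    orm=lambda p, s: 1.0 if s == "revised" else 0.0,
)
print(result)  # -> revised
```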

MAGICORE surpasses all baseline methods after just one iteration, demonstrating a 3.2% improvement over Best-of-120 on Llama-3-8B while using half the samples. Compared to Self-Refine and Self-Refine with Self-Consistency, MAGICORE shows significant gains of up to 17.1% on Llama-3-8B and 5.4% over combined baselines. MAGICORE continues to improve accuracy as iterations increase, stabilizing at 75.6%, unlike fluctuating baselines. Additionally, MAGICORE uses fewer samples, avoids over-correction through selective refinement, and benefits from its multi-agent setup. Separate roles for the Reviewer and Refiner further improve performance, highlighting MAGICORE's effective adaptive refinement strategy.

MAGICORE adaptively allocates computational resources to challenging problems, using selective refinement for harder cases. It addresses excessive refinement, LLMs’ limitations in error detection, and insufficient refinement. By combining global and local reward models, MAGICORE determines which problems need refinement and uses iterative feedback to improve accuracy. Tested on math datasets and two models, MAGICORE consistently outperforms baseline methods, even those with higher computational demands. Unlike traditional strategies that stagnate, MAGICORE’s performance improves with additional iterations, highlighting the importance of selective refinement and multi-agent communication in enhancing problem-solving capabilities.
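The adaptive allocation step can be sketched as a simple router: sample a few solutions, score them with the reward model, and escalate to refinement only when confidence is low. The function names and the 0.5 cutoff are illustrative assumptions, not values from the paper:

```python
def route(problem, solve_k, orm, hard_cutoff=0.5):
    """Classify a problem as easy or hard from reward-model confidence.
    Easy problems keep the best coarsely-aggregated candidate; hard ones
    are flagged for the multi-agent refinement loop, so extra compute is
    spent only where it is likely to help."""
    candidates = solve_k(problem)
    scored = [(orm(problem, c), c) for c in candidates]
    best_score, best = max(scored)
    if best_score >= hard_cutoff:
        return "easy", best   # coarse aggregation suffices
    return "hard", best       # escalate to multi-agent refinement

# Toy reward model that is confident about candidate "a":
label, answer = route("q", solve_k=lambda p: ["a", "b"],
                      orm=lambda p, c: {"a": 0.9, "b": 0.1}[c])
print(label, answer)  # -> easy a
```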


Check out the Paper. All credit for this research goes to the researchers of this project.


