cs.AI updates on arXiv.org · May 27, 13:05
Done Is Better than Perfect: Unlocking Efficient Reasoning by Structured Multi-Turn Decomposition

This paper proposes Multi-Turn Decomposition (MinD), a method aimed at the high latency caused by the excessively long Chain-of-Thought (CoT) reasoning of Large Reasoning Models (LRMs). MinD decomposes the conventional CoT into a sequence of explicit, structured multi-turn interactions, in which the model delivers one thinking unit and a corresponding answer per turn. Subsequent turns can reflect on, verify, revise, or explore alternatives to the thinking and answers of earlier turns. This not only speeds up answer delivery but also gives users explicit control over the iterative reasoning process. Trained with supervised fine-tuning (SFT) followed by reinforcement learning (RL), MinD achieves up to ~70% reductions in both output token usage and time to first token (TTFT) on the MATH dataset, while remaining competitive on reasoning benchmarks.

💡 The core of MinD is to decompose the conventional CoT reasoning process into a sequence of explicit, structured multi-turn interactions, each turn comprising one thinking unit and a corresponding answer, which makes the reasoning process easy to manage and control.
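
To make the turn structure concrete, here is a minimal Python sketch of how such a multi-turn response could be represented and parsed. The `<think>`/`<answer>` delimiters and the `Turn`/`parse_turns` names are illustrative assumptions; the paper does not specify its exact markup.

```python
import re
from dataclasses import dataclass

@dataclass
class Turn:
    thought: str  # the turn's thinking unit
    answer: str   # the candidate answer this turn commits to

# Hypothetical per-turn delimiters; the paper does not publish its markup.
TURN_PATTERN = re.compile(
    r"<think>(.*?)</think>\s*<answer>(.*?)</answer>", re.DOTALL
)

def parse_turns(response: str) -> list[Turn]:
    """Split a multi-turn response into explicit (thought, answer) pairs."""
    return [Turn(t.strip(), a.strip()) for t, a in TURN_PATTERN.findall(response)]
```

Because each turn carries its own answer, the first candidate answer becomes available after one turn rather than after the entire reasoning trace, which is what drives the TTFT reduction.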

✅ MinD lets the model reflect on, verify, revise, or explore alternatives to the thinking and answers of earlier turns in subsequent ones; this iterative refinement improves the accuracy and reliability of the final answer.
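
The turn-level control this enables can be sketched as a decoding loop that the caller may halt at any turn. Everything here is an assumption for illustration: `generate_next_turn` is a hypothetical single-turn decoding step supplied by the caller, and the agreement-based stopping rule is one plausible policy, not the paper's.

```python
from typing import Callable

# One (thought, answer) pair per turn.
TurnPair = tuple[str, str]

def answer_with_early_stop(
    query: str,
    generate_next_turn: Callable[[str, list[TurnPair]], TurnPair],
    max_turns: int = 8,
) -> str:
    """Decode turn by turn, stopping as soon as the answer stabilizes."""
    history: list[TurnPair] = []
    for _ in range(max_turns):
        history.append(generate_next_turn(query, history))
        # Halt once two consecutive turns agree on the answer; later turns
        # would only re-verify an already stable result.
        if len(history) >= 2 and history[-1][1] == history[-2][1]:
            break
    return history[-1][1]
```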

⏱️ By combining supervised fine-tuning (SFT) with reinforcement learning (RL), MinD cuts output token usage and time to first token (TTFT) by up to ~70% on the MATH dataset, substantially improving reasoning efficiency.

📊 Experiments show that MinD remains competitive on reasoning benchmarks including MATH-500, AIME24, AMC23, and GPQA-Diamond, demonstrating that the efficiency gains come without sacrificing reasoning accuracy.

arXiv:2505.19788v1 (Announce Type: new)

Abstract: Large Reasoning Models (LRMs) are criticized for the excessively lengthy Chain-of-Thought (CoT) they produce to derive the final answer, suffering from high first-token and overall latency. Typically, the CoT of LRMs mixes multiple thinking units; each unit attempts to produce a candidate answer to the original query. Hence, a natural idea for improving efficiency is to reduce the number of units. Yet the fact that the thinking units in vanilla CoT cannot be explicitly managed makes doing so challenging. This paper introduces Multi-Turn Decomposition (MinD) to decode the conventional CoT into a sequence of explicit, structured, turn-wise interactions that bridge this gap. In MinD, the model provides a multi-turn response to the query, where each turn comprises a thinking unit and yields a corresponding answer. Subsequent turns can reflect on, verify, revise, or explore alternative approaches to both the thinking and answer parts of earlier ones. This not only delivers the answer more swiftly but also enables explicit control over the iterative reasoning process (i.e., users may halt or continue at any turn). We follow a supervised fine-tuning (SFT) then reinforcement learning (RL) paradigm to realize MinD. We first rephrase the outputs of an LRM into multi-turn formats by prompting another LLM, then tune the LRM with such data. Observing that the tuned model tends to consume even more tokens than the original one (probably because the multi-turn formats introduce additional answer tokens), we advocate leveraging RL algorithms like GRPO to prioritize correct outputs with fewer turns. Trained on the MATH dataset using R1-Distill models, MinD achieves up to ~70% reduction in both output token usage and time to first token (TTFT), while maintaining competitive performance on reasoning benchmarks such as MATH-500, AIME24, AMC23, and GPQA-Diamond.
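
The abstract names GRPO as the RL algorithm and states that correct outputs with fewer turns are prioritized; a minimal sketch of a reward in that spirit follows. The reward shape, the `turn_penalty` coefficient, and exact-match answer checking are assumptions for illustration, not details from the paper.

```python
def turn_efficiency_reward(
    predicted: str,
    reference: str,
    num_turns: int,
    turn_penalty: float = 0.1,  # assumed coefficient, not from the paper
) -> float:
    """Reward correct answers, discounted by the number of turns used."""
    if predicted.strip() != reference.strip():
        return 0.0  # brevity never outweighs correctness
    return max(0.0, 1.0 - turn_penalty * (num_turns - 1))
```

Within a group-relative scheme like GRPO, such a reward would rank shorter correct rollouts above longer correct ones in each sampled group, producing the pressure toward fewer turns that the abstract describes.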


Related tags

Multi-Turn Decomposition · Large Reasoning Models · Chain-of-Thought · Reinforcement Learning · Reasoning Efficiency