MarkTechPost@AI July 30, 2024
Recursive IntroSpEction (RISE): A Machine Learning Approach for Fine-Tuning LLMs to Improve Their Own Responses Over Multiple Turns Sequentially

RISE (Recursive IntroSpEction) is a novel approach to fine-tuning large language models (LLMs) that converts single-turn problems into multi-turn Markov decision processes and draws on principles from online imitation learning and reinforcement learning, enabling LLMs to iteratively detect and correct their own mistakes over multiple turns of interaction and thereby improve their reasoning ability.

🤔 RISE converts a single-turn problem into a multi-turn Markov decision process (MDP): the prompt is treated as the initial state, the model's response is the action, the next state is formed by concatenating the current state, the model's action, and a fixed introspection prompt, and a reward is assigned based on the correctness of the answer.

🚀 RISE applies online imitation learning and reinforcement learning to data collected within this MDP framework, generating improved responses either by distilling from a more capable model or by self-distillation, and then trains the model with reward-weighted supervised learning so that it can improve its predictions on subsequent attempts.

💪 RISE delivers significant performance gains across multiple benchmarks. On GSM8K, RISE improves the five-turn performance of the Llama2 base model by 15.1% and 17.7% after one and two iterations, respectively, and on MATH it yields gains of 3.4% and 4.6%. These improvements exceed those of other methods, including prompting-only self-refinement and standard fine-tuning.

🌟 RISE remains effective across different base models: Mistral-7B + RISE outperforms Eurus-7B-SFT, a model fine-tuned specifically for mathematical reasoning. A self-distillation variant of RISE also shows promise, improving five-turn performance even when trained entirely on self-generated data and supervision.

✨ RISE offers a distinctive approach to fine-tuning large language models to improve their responses over multiple turns of interaction. By converting single-turn problems into multi-turn Markov decision processes, RISE performs iterative reinforcement learning on on-policy rollout data using expert or self-generated supervision. The method substantially improves the self-improvement ability of 7B models on reasoning tasks, outperforming prior approaches, with consistent gains across different base models and tasks that demonstrate genuine sequential error correction.

Large language models (LLMs) have gained significant attention as powerful tools for various tasks, but their potential as general-purpose decision-making agents presents unique challenges. To function effectively as agents, LLMs must go beyond simply generating plausible text completions. They need to exhibit interactive, goal-directed behavior to accomplish specific tasks. This requires two critical abilities: actively seeking information about the task and making decisions that can be improved through “thinking” and verification at inference time. Current methodologies struggle to achieve these capabilities, particularly in complex tasks requiring logical reasoning. While LLMs often possess the necessary knowledge, they frequently fail to apply it effectively when asked to correct their own mistakes sequentially. This limitation highlights the need for a more robust approach to enable test-time self-improvement in LLM agents.

Researchers have attempted various approaches to enhance the reasoning and thinking capabilities of foundation models for downstream applications. These methods primarily focus on developing prompting techniques for effective multi-turn interaction with external tools, sequential refinement of predictions through reflection, thought verbalization, self-critique and revision, or using other models for response criticism. While some of these approaches show promise in improving responses, they often rely on detailed error traces or external feedback to succeed.

Prompting techniques, although useful, have limitations. Studies indicate that intrinsic self-correction guided solely by the LLM itself is often infeasible for off-the-shelf models, even when they possess the required knowledge to tackle the prompt. Fine-tuning LLMs to obtain self-improvement capabilities has also been explored, using strategies such as training on self-generated responses, learned verifiers, search algorithms, contrastive prompting on negative data, and iterated supervised or reinforcement learning.

However, these existing methods primarily focus on improving single-turn performance rather than instilling the ability to improve over sequential turns of interaction. While some work has explored fine-tuning LLMs for multi-turn interaction directly via reinforcement learning, that line of work targets genuinely multi-turn tasks, which poses different challenges from solving a single-turn problem over multiple sequential attempts.

Researchers from Carnegie Mellon University, UC Berkeley, and MultiOn present RISE (Recursive IntroSpEction), a unique approach to enhance LLMs’ self-improvement capabilities. This method employs an iterative fine-tuning procedure that frames single-turn prompts as multi-turn Markov decision processes. By incorporating principles from online imitation learning and reinforcement learning, RISE develops strategies for multi-turn data collection and training. This approach enables LLMs to recursively detect and correct mistakes in subsequent iterations, a capability previously thought challenging to attain. Unlike traditional methods focusing on single-turn performance, RISE aims to instill dynamic self-improvement in LLMs, potentially revolutionizing their problem-solving abilities in complex scenarios.

RISE presents an innovative approach to fine-tune foundation models for self-improvement over multiple turns. The method begins by converting single-turn problems into a multi-turn Markov Decision Process (MDP). This MDP construction transforms prompts into initial states, with model responses serving as actions. The next state is created by concatenating the current state, the model’s action, and a fixed introspection prompt. Rewards are based on answer correctness. RISE then employs strategies for data collection and learning within this MDP framework. The approach uses either distillation from a more capable model or self-distillation to generate improved responses. Finally, RISE applies reward-weighted supervised learning to train the model, enabling it to enhance its predictions over sequential attempts.
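To make this construction concrete, here is a minimal sketch in Python of how a single-turn problem could be rolled out as a multi-turn MDP. The introspection prompt wording, the 0/1 reward, and helper names such as `extract_final_answer` are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of RISE's single-turn -> multi-turn MDP construction.
# The prompt wording, reward values, and helpers below are assumptions
# for illustration, not the paper's exact code.

INTROSPECTION_PROMPT = (
    "The answer above may contain mistakes. Reflect on it and try again."
)

def extract_final_answer(response: str) -> str:
    """Toy answer extractor: take whatever follows the last 'Answer:' marker."""
    return response.rsplit("Answer:", 1)[-1].strip()

def initial_state(problem: str) -> str:
    """The single-turn prompt itself serves as the initial MDP state."""
    return problem

def transition(state: str, action: str) -> str:
    """Next state = current state + model response + fixed introspection prompt."""
    return f"{state}\n\n{action}\n\n{INTROSPECTION_PROMPT}"

def reward(action: str, ground_truth: str) -> float:
    """Sparse reward based on final-answer correctness."""
    return 1.0 if extract_final_answer(action) == ground_truth else 0.0

def rollout(problem: str, ground_truth: str, policy, num_turns: int = 5):
    """Roll the model out for several turns, recording (state, action, reward)."""
    state, trajectory = initial_state(problem), []
    for _ in range(num_turns):
        action = policy(state)              # model generates a response
        trajectory.append((state, action, reward(action, ground_truth)))
        state = transition(state, action)   # append response + introspection prompt
    return trajectory
```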

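The collected rollouts can then be used for reward-weighted supervised learning. The sketch below assumes a Hugging Face-style causal LM and tokenizer; the exponentiated-reward weighting is one common choice rather than necessarily the paper's exact scheme, and improved responses obtained via distillation or self-distillation would simply replace the `action` strings in the trajectory.

```python
import torch

def reward_weighted_sft_loss(model, tokenizer, trajectory, temperature: float = 1.0):
    """Reward-weighted supervised loss over (state, action, reward) tuples:
    responses that earned higher reward contribute more to the update."""
    losses, weights = [], []
    for state, action, r in trajectory:
        # Next-token prediction loss on (state + action), with the state
        # tokens masked out so only the response is supervised.
        state_ids = tokenizer(state, return_tensors="pt").input_ids
        full_ids = tokenizer(state + action, return_tensors="pt").input_ids
        labels = full_ids.clone()
        labels[:, : state_ids.shape[1]] = -100        # ignore prompt tokens
        losses.append(model(input_ids=full_ids, labels=labels).loss)
        # Exponentiated-reward weighting (an assumed, generic choice).
        weights.append(torch.exp(torch.tensor(r / temperature)))
    weights = torch.stack(weights)
    weights = weights / weights.sum()                 # normalize across the trajectory
    return (weights * torch.stack(losses)).sum()
```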
RISE demonstrates significant performance improvements across multiple benchmarks. On GSM8K, RISE boosted the Llama2 base model’s five-turn performance by 15.1% and 17.7% after one and two iterations respectively, without using an oracle. On MATH, improvements of 3.4% and 4.6% were observed. These gains surpass those achieved by other methods, including prompting-only self-refinement and standard fine-tuning on oracle data. Notably, RISE outperforms sampling multiple responses in parallel, indicating its ability to genuinely correct mistakes over sequential turns. The method’s effectiveness persists across different base models, with Mistral-7B + RISE outperforming Eurus-7B-SFT, a model specifically fine-tuned for math reasoning. Additionally, a self-distillation version of RISE shows promise, improving five-turn performance even with entirely self-generated data and supervision.

RISE introduces a unique approach for fine-tuning large language models to improve their responses over multiple turns. By converting single-turn problems into multi-turn Markov decision processes, RISE employs iterative reinforcement learning on on-policy rollout data, using expert or self-generated supervision. The method significantly enhances the self-improvement abilities of 7B models on reasoning tasks, outperforming previous approaches. Results show consistent performance gains across different base models and tasks, demonstrating genuine sequential error correction. While computational constraints currently limit the number of training iterations, especially with self-generated supervision, RISE presents a promising direction for advancing LLM self-improvement capabilities.


Check out the Paper. All credit for this research goes to the researchers of this project.

