MarkTechPost@AI · 2 days ago, 15:25
Qwen Researchers Propose QwenLong-L1: A Reinforcement Learning Framework for Long-Context Reasoning in Large Language Models

The Qwen research team has introduced QwenLong-L1, a reinforcement learning (RL) framework for adapting large reasoning models (LRMs) to long-context reasoning tasks. The framework is organized into three key stages: warm-up supervised fine-tuning (SFT), curriculum-guided phased reinforcement learning, and difficulty-aware retrospective sampling. It integrates recent advances in group-relative RL optimization and uses a hybrid reward mechanism, combining rule-based exact-match verification with semantic evaluation by a lightweight LLM, to ensure both precision and recall during policy training. Experiments show that QwenLong-L1 performs strongly on seven long-context document QA benchmarks, outperforming baseline models and exhibiting interpretable reasoning patterns that emerge during training.

🔥**QwenLong-L1 framework**: The framework uses warm-up supervised fine-tuning (SFT) to provide a stable initialization of the policy model, ensuring basic context comprehension and answer-extraction capabilities.

📚**Curriculum-guided phased reinforcement learning**: A staged training process with progressively longer contexts lets the model acquire long-context reasoning behaviors step by step, without destabilizing policy updates.

💡**Difficulty-aware retrospective sampling**: Hard examples from earlier stages are retained and reused, weighted by their difficulty, strengthening exploration and encouraging deeper reasoning and robustness across diverse inputs.

🏆**Experimental results**: QwenLong-L1 was evaluated on seven long-context QA benchmarks, including DocMath and Frames; the 32B variant, QwenLong-L1-32B, showed strong empirical performance, surpassing baselines such as R1-Distill-Qwen-32B.

While large reasoning models (LRMs) have shown impressive capabilities in short-context reasoning through reinforcement learning (RL), these gains do not generalize well to long-context scenarios. Applications such as multi-document QA, research synthesis, and legal or financial analysis require models to process and reason over sequences exceeding 100K tokens. However, RL optimization in such regimes is plagued by slower reward convergence, unstable policy updates due to KL divergence fluctuations, and reduced exploration resulting from entropy collapse. These bottlenecks reveal a fundamental gap in transitioning LRMs from short-context proficiency to long-context generalization.

QwenLong-L1: A Structured RL Framework for Long-Context Adaptation

To address these limitations, the Qwen Research team introduces QwenLong-L1, a novel RL framework designed to adapt LRMs to long-context reasoning tasks. The framework is structured into three key stages:

1. **Warm-up supervised fine-tuning (SFT)**: provides a stable policy initialization with basic context comprehension and answer-extraction capabilities.
2. **Curriculum-guided phased reinforcement learning**: increases context length in controlled stages so that long-context reasoning behaviors emerge without destabilizing policy updates.
3. **Difficulty-aware retrospective sampling**: retains and reuses hard examples from earlier stages, weighted by difficulty, to sustain exploration and encourage deeper reasoning.

These stages are complemented by hybrid reward mechanisms—combining rule-based exact match verification with semantic evaluation by a lightweight LLM—ensuring both precision and recall during policy training.
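
Put together, the recipe can be pictured as the following Python skeleton. Only the three-stage structure and the 20K-to-60K schedule come from the article; every function and data field below is an illustrative stub, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class Example:
    context_len: int
    difficulty: float = 0.5  # assumed score, e.g. 1 - observed pass rate

def supervised_finetune(model, data):   # stub: Stage 1 warm-up SFT
    return model

def rl_train_phase(model, data):        # stub: one curriculum RL phase
    hard = [ex for ex in data if ex.difficulty > 0.7]
    return model, hard

def sample_by_difficulty(pool, k=2):    # stub: expanded in a later sketch
    return sorted(pool, key=lambda ex: -ex.difficulty)[:k]

def train_qwenlong_l1(model, sft_data, rl_data):
    # Stage 1: warm-up SFT gives a stable policy initialization.
    model = supervised_finetune(model, sft_data)
    # Stages 2 and 3: phased RL over progressively longer contexts,
    # re-injecting hard examples retained from earlier phases.
    hard_pool = []
    for max_len in (20_000, 60_000):    # progressive context scaling
        phase = [ex for ex in rl_data if ex.context_len <= max_len]
        phase += sample_by_difficulty(hard_pool)
        model, hard = rl_train_phase(model, phase)
        hard_pool.extend(hard)
    return model
```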

Technical Design and Methodological Advantages

QwenLong-L1 integrates recent advances in group-relative RL optimization, specifically GRPO and DAPO, to mitigate the computational overhead associated with long-context value estimation:

- **GRPO** (Group Relative Policy Optimization) estimates advantages by normalizing rewards within a group of responses sampled for the same prompt, eliminating the need for a separate value network.
- **DAPO** adds mechanisms such as dynamic sampling, decoupled clipping, and overlong-output reward shaping to keep gradients informative and counteract entropy collapse.
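
To make the group-relative idea concrete, here is a minimal Python sketch of GRPO-style advantage estimation; it is a generic illustration of the technique, not code from the paper.

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages for one prompt's group of sampled responses.

    Each response's advantage is its reward normalized by the group mean
    and standard deviation, so no learned value (critic) network is
    needed, the key saving when inputs run to tens of thousands of tokens.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four rollouts for the same long-context prompt.
print(group_relative_advantages([1.0, 0.0, 1.0, 0.5]))
```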

The reward function is defined as the maximum of two signals: a deterministic rule-based match and a semantic judgment from a compact evaluator model (e.g., Qwen2.5-1.5B). This hybrid approach avoids overfitting to rigid formats while maintaining answer correctness across varied notations and phrasings.
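
A minimal sketch of this hybrid reward, assuming a hypothetical `llm_judge` callable that stands in for the compact evaluator model (e.g., Qwen2.5-1.5B) and returns a score in [0, 1]:

```python
import re

def normalize(text):
    # Light normalization so the rule-based check tolerates trivial
    # differences in casing, whitespace, and punctuation.
    return re.sub(r"[\s\.,;:!?]+", " ", text.lower()).strip()

def hybrid_reward(prediction, reference, llm_judge):
    """Reward = max(rule-based exact match, LLM semantic judgment).

    Taking the maximum keeps exact hits at full reward while letting the
    judge credit correct answers written in a different notation or phrasing.
    """
    rule_score = 1.0 if normalize(prediction) == normalize(reference) else 0.0
    judge_score = llm_judge(prediction, reference)  # assumed in [0, 1]
    return max(rule_score, judge_score)

# Toy stand-in judge; a real system would query the evaluator model.
toy_judge = lambda pred, ref: 1.0 if ref.lower() in pred.lower() else 0.0
print(hybrid_reward("The answer is 42.", "42", toy_judge))  # 1.0
```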

Moreover, the framework is optimized via progressive context scaling, where the RL process transitions from 20K-token to 60K-token input lengths in controlled phases, stabilizing training dynamics and facilitating policy generalization.
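
Complementing the progressive scaling, difficulty-aware retrospective sampling can be pictured as a weighted draw over a pool of retained hard examples. The sketch below assumes each example carries a `difficulty` score (e.g., one minus the pass rate observed in earlier phases); the weighting scheme is illustrative, not the paper's exact procedure.

```python
import random
from dataclasses import dataclass

@dataclass
class Example:
    context_len: int
    difficulty: float  # assumed: e.g. 1 - pass rate from earlier phases

def sample_by_difficulty(hard_pool, k=64):
    """Difficulty-weighted draw (with replacement) of retained hard examples.

    Harder examples are sampled more often, which encourages deeper
    reasoning and robustness across diverse inputs.
    """
    if not hard_pool:
        return []
    weights = [ex.difficulty for ex in hard_pool]
    return random.choices(hard_pool, weights=weights, k=min(k, len(hard_pool)))

pool = [Example(30_000, 0.9), Example(25_000, 0.2), Example(55_000, 0.7)]
print([ex.difficulty for ex in sample_by_difficulty(pool, k=3)])
```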

Experimental Results and Benchmark Performance

QwenLong-L1 was evaluated on seven long-context document QA benchmarks, including DocMath, Frames, 2WikiMultihopQA, HotpotQA, Musique, NarrativeQA, and Qasper. The 32B variant, QwenLong-L1-32B, demonstrated strong empirical performance, outperforming baselines such as R1-Distill-Qwen-32B.

Ablation studies further validated the individual contributions of SFT, phased RL, and retrospective sampling. Notably, RL played a decisive role in enabling emergent reasoning behaviors such as grounding, subgoal setting, verification, and backtracking—traits not effectively induced by supervised fine-tuning alone.

Conclusion

QwenLong-L1 represents a systematic approach to equipping LRMs with robust long-context reasoning capabilities through reinforcement learning. Its design effectively bridges the gap between short-context expertise and the demands of information-dense environments by combining supervised initialization, curriculum-driven context scaling, and hybrid evaluation strategies. The framework not only achieves state-of-the-art results across long-context benchmarks but also demonstrates the emergence of interpretable reasoning patterns during training.


Check out the Paper, Model on Hugging Face and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.

