MarkTechPost@AI · April 11
Together AI Released DeepCoder-14B-Preview: A Fully Open-Source Code Reasoning Model That Rivals o3-Mini With Just 14B Parameters

Together AI, in collaboration with the Agentica team, has released DeepCoder-14B-Preview, a powerful open-source code reasoning model. Fine-tuned from DeepSeek-R1-Distilled-Qwen-14B via distributed reinforcement learning, it achieves 60.6% Pass@1 accuracy on LiveCodeBench (LCB), performance on par with leading models such as o3-mini, while using only 14 billion parameters — a striking display of efficiency and capability. The release of DeepCoder-14B marks an important advance in code generation and underscores the importance of high-quality, verifiable datasets.

💡 DeepCoder-14B-Preview achieves 60.6% Pass@1 accuracy on LiveCodeBench (LCB), matching leading models such as o3-mini and demonstrating strong code reasoning ability.

📚 The model was fine-tuned from DeepSeek-R1-Distilled-Qwen-14B via distributed reinforcement learning, trained on 24,000 verifiable coding problems to ensure dataset quality and diversity.

🛠️ Training took 2.5 weeks on 32 H100 GPUs, with an emphasis on reproducibility and systems efficiency. The "verl-pipe" systems optimization doubled training speed and offers a reusable framework for future models.

✅ To verify code correctness, DeepCoder uses a dual-sandbox environment, evaluating more than 1,000 coding problems at every reinforcement learning step so that each model-generated solution is rigorously tested.

🔓 DeepCoder is fully open source, including its dataset, code, and training logs, paving the way for community-driven development and inviting developers to contribute and improve it.

The demand for intelligent code generation and automated programming solutions has intensified, fueled by rapidly rising software complexity and developer productivity needs. While natural language processing and general reasoning models have advanced through significant breakthroughs, the coding domain has seen slower progress. This lag is primarily attributed to the scarcity of high-quality, verifiable datasets critical for effectively training RL-based systems. Unlike mathematical problems, which benefit from a wealth of structured, verifiable examples online, coding tasks often suffer from noise, insufficient test coverage, and unverifiable outputs. Consequently, advancing LLMs for code generation has remained a formidable challenge until now.

DeepCoder-14B-Preview was released by Together AI in collaboration with the Agentica team. This powerful model was fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning, and it demonstrates substantial progress in code reasoning. With 60.6% Pass@1 accuracy on LiveCodeBench (LCB), DeepCoder-14B-Preview not only closes the gap with leading models like o3-mini but matches their output, all while using just 14 billion parameters — a notable feat in efficiency and capability.

The release is especially significant given the benchmarks. DeepSeek-R1-Distill-Qwen-14B scores 53.0% on LCB, so DeepCoder-14B-Preview delivers a roughly 8-percentage-point gain in accuracy over its base model. It also competes toe-to-toe with established models such as o3-mini (60.9%) and o1-2024-12-17 (59.5%) in accuracy and coding prowess. On competitive-coding metrics, it reaches a Codeforces rating of 1936, placing it in the 95.3rd percentile — a clear indicator of real-world coding competence.
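For context on the headline metric: Pass@1 is typically computed with the unbiased pass@k estimator popularized by Chen et al. (2021). The article does not spell out DeepCoder's exact evaluation harness, so the following is a general sketch of the standard estimator, not the project's own code.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: samples generated per problem
    c: samples that pass every unit test
    k: the k in pass@k (k=1 gives Pass@1)
    """
    if n - c < k:
        return 1.0  # too few failures for any k-subset to miss
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 4 of 16 sampled solutions correct -> Pass@1 = 0.25
print(pass_at_k(n=16, c=4, k=1))
```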

The model was trained over 2.5 weeks on 32 H100 GPUs using a curated dataset of 24,000 verifiable coding problems. This dataset was built by rigorously filtering existing resources to ensure quality and diversity. It combines problems from the TACO Verified set, PrimeIntellect’s SYNTHETIC-1, and entries from LiveCodeBench submitted between May 2023 and July 2024. The selection process emphasized programmatic verification of test cases, a minimum of five unit tests per problem, and deduplication to avoid data contamination. This helped maintain training integrity and maximize RL effectiveness.
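As an illustration of those curation rules, here is a minimal sketch in Python. The schema (`prompt`, `tests` with `input`/`output` fields) is hypothetical, not DeepCoder's actual pipeline; only the criteria — programmatic verification, at least five unit tests, and deduplication — come from the description above.

```python
import hashlib

MIN_TESTS = 5  # curation rule: at least five unit tests per problem

def keep(problem: dict, seen: set) -> bool:
    """Apply the three filters described above: verifiable tests,
    a minimum test count, and deduplication."""
    tests = problem.get("tests", [])
    if len(tests) < MIN_TESTS:
        return False
    # Programmatic verification: each test carries a concrete input/output pair.
    if not all("input" in t and "output" in t for t in tests):
        return False
    # Deduplicate on a normalized hash of the statement to avoid contamination.
    digest = hashlib.sha256(problem["prompt"].strip().lower().encode()).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True

seen_hashes: set = set()
raw = [{"prompt": "Sum two ints", "tests": [{"input": "1 2", "output": "3"}] * 5}]
curated = [p for p in raw if keep(p, seen_hashes)]
```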

To facilitate this level of validation, DeepCoder’s training incorporated a scalable code sandbox environment capable of executing massive parallel evaluations. Over 1,000 coding problems were assessed at each RL step using two robust sandboxes, the Together Code Interpreter and a local sandbox. These environments ensured that every model-generated solution was rigorously tested across multiple unit tests, filtering out reward hacking and encouraging genuine reasoning over memorization.
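A local sandbox of this kind can be approximated with a subprocess and a timeout, as in the sketch below. The real Together Code Interpreter and DeepCoder's local sandbox are far more hardened (isolation, resource limits), so treat this as a conceptual outline only.

```python
import subprocess
import sys
import tempfile

def passes_test(solution_code: str, test_input: str, expected: str,
                timeout_s: float = 5.0) -> bool:
    """Run a candidate solution against one stdin/stdout unit test."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            input=test_input, capture_output=True, text=True, timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False  # hanging programs count as failures
    return result.stdout.strip() == expected.strip()

print(passes_test("print(sum(map(int, input().split())))", "1 2", "3"))
```

Granting reward only when a solution passes *all* of a problem's tests, rather than partial credit, is what makes this setup resistant to reward hacking.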

Also, the system architecture supporting DeepCoder was optimized through “verl-pipe,” an upgraded extension to the post-training RL pipeline that doubled training speed through systems-level improvements. This enhancement accelerates development cycles and provides a modular framework for others looking to build or iterate on similar LLMs in open-source ecosystems.
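The article does not detail verl-pipe's internals, but the core systems idea — overlapping the I/O-bound sampling-and-evaluation phase with the GPU-bound training phase — can be sketched as one-step pipelining. Everything below (function names, stub bodies) is illustrative, not the actual verl-pipe API.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def rollout(batch):
    """Stub: sample solutions and run sandbox tests (I/O-bound)."""
    time.sleep(0.1)
    return [f"reward for problem {p}" for p in batch]

def train_step(results):
    """Stub: policy-gradient update on verified rewards (GPU-bound)."""
    time.sleep(0.1)

batches = [[1, 2], [3, 4], [5, 6]]
with ThreadPoolExecutor(max_workers=1) as pool:
    inflight = pool.submit(rollout, batches[0])
    for nxt in batches[1:]:
        results = inflight.result()           # wait for the in-flight rollout
        inflight = pool.submit(rollout, nxt)  # launch the next rollout now
        train_step(results)                   # train while sampling overlaps
    train_step(inflight.result())
```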

Some key takeaways from the release of DeepCoder-14B-Preview include:

- 60.6% Pass@1 on LiveCodeBench, matching leading models such as o3-mini with only 14 billion parameters.
- Distributed RL fine-tuning on 24,000 rigorously filtered, verifiable coding problems.
- 2.5 weeks of training on 32 H100 GPUs, with dual sandboxes verifying over 1,000 problems at each RL step.
- The verl-pipe systems optimization doubled training speed and is reusable by other open-source efforts.
- The dataset, code, and training logs are all fully open-sourced.
Check out the Technical details, Model on Hugging Face and GitHub Page. All credit for this research goes to the researchers of this project.
