MarkTechPost@AI · 9 hours ago
OpenAI Just Released the Hottest Open-Weight LLMs: gpt-oss-120B (Runs on a High-End Laptop) and gpt-oss-20B (Runs on a Phone)

Breaking with convention, OpenAI has released its first two open-weight language models, gpt-oss-120b and gpt-oss-20b, marking the start of a new era of transparency, customization, and raw computational power in AI. Both models ship under the Apache 2.0 license, allowing anyone to download, inspect, fine-tune, and run them on their own hardware, giving researchers, developers, and enthusiasts unprecedented freedom. gpt-oss-120b rivals top commercial models, runs on a high-end GPU, and supports long contexts and configurable reasoning effort; gpt-oss-20b is optimized for mobile and local devices, delivering low-latency, private AI on consumer hardware. Both models use a Mixture-of-Experts architecture and MXFP4 quantization to balance performance with resource consumption, giving enterprises, developers, and the community enormous flexibility and room to innovate.

✨ **OpenAI releases two powerful open-weight models**: gpt-oss-120b and gpt-oss-20b are OpenAI's first open-weight language models, which users are free to download, study, fine-tune, and deploy. This marks a move toward a more open, transparent, and customizable AI ecosystem, breaking down the proprietary barriers of the past.

🚀 **gpt-oss-120b: top-tier performance, flexible configuration**: The model has 117 billion parameters, reaches OpenAI o4-mini-level performance, and runs on a single high-end GPU such as an Nvidia H100. It supports a context window of up to 128,000 tokens and offers a configurable "reasoning effort" setting, making it well suited to complex tasks such as research automation, technical writing, and code generation.

📱 **gpt-oss-20b: optimized for local and mobile use**: With 21 billion parameters and performance between o3-mini and o4-mini, it runs on consumer devices with 16 GB of RAM, including smartphones. The model is designed for low-latency, private on-device AI and supports tool use via APIs, structured output generation, and Python code execution, making it a strong fit for edge computing and mobile applications.

💡 **Technical innovations deliver efficiency**: Both models use a Mixture-of-Experts (MoE) architecture that activates only a subset of expert subnetworks for each token, balancing high performance with modest resource demands. Combined with MXFP4 quantization, which further shrinks the models' memory footprint, deployment becomes far easier, especially on limited hardware.

🌐 **Enabling a wide range of use cases**: Enterprises can deploy the models on-premises to ensure data privacy and compliance; developers get a fully controllable AI development environment without API restrictions; and the community can already pull the models from Hugging Face and similar platforms for rapid deployment and experimentation (a minimal loading sketch follows below), accelerating the spread and development of AI technology.
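
For developers who want to try local deployment, here is a minimal sketch using the Hugging Face transformers library. The repository id openai/gpt-oss-20b, the chat template usage, and the dtype/device settings are assumptions based on typical Hugging Face workflows, not details confirmed by this article; check the official model card before running it.

```python
# Minimal sketch: loading gpt-oss-20b locally via Hugging Face transformers.
# The repo id "openai/gpt-oss-20b" is an assumption; consult the model card
# for the exact id, required transformers version, and recommended settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let the checkpoint decide the precision
    device_map="auto",    # spread weights across available GPU/CPU memory
)

messages = [{"role": "user", "content": "Summarize MXFP4 quantization in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```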

OpenAI has just sent seismic waves through the AI world: for the first time since GPT-2 hit the scene in 2019, the company is releasing not one, but two open-weight language models. Meet gpt-oss-120b and gpt-oss-20b: models that anyone can download, inspect, fine-tune, and run on their own hardware. This launch doesn't just shift the AI landscape; it ushers in a new era of transparency, customization, and raw computational power for researchers, developers, and enthusiasts everywhere.

Why Is This Release a Big Deal?

OpenAI has long cultivated a reputation for both jaw-dropping model capabilities and a fortress-like approach to proprietary tech. That changed on August 5, 2025. These new models are distributed under the permissive Apache 2.0 license, making them open for commercial and experimental use. The difference? Instead of hiding behind cloud APIs, anyone can now put OpenAI-grade models under their microscope—or put them directly to work on problems at the edge, in enterprise, or even on consumer devices.

Meet the Models: Technical Marvels with Real-World Muscle

gpt-oss-120B

A 117-billion-parameter model that reaches roughly o4-mini-level performance and runs on a single high-end GPU such as an Nvidia H100. It supports a 128,000-token context window and a configurable "reasoning effort" setting, which suits demanding workloads such as research automation, technical writing, and code generation.

gpt-oss-20B

A 21-billion-parameter model that lands between o3-mini and o4-mini and runs on consumer hardware with 16 GB of RAM, including smartphones. It is built for low-latency, private on-device use, with support for tool calling, structured outputs, and Python code execution.

Technical Details: Mixture-of-Experts and MXFP4 Quantization

Both models use a Mixture-of-Experts (MoE) architecture, activating only a handful of "expert" subnetworks per token. The result: enormous total parameter counts, while only a fraction of the weights do work on any given token, keeping compute costs low and inference fast on today's high-performance consumer and enterprise hardware.
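
To make the routing idea concrete, here is a small, illustrative top-k MoE layer in PyTorch. It is not OpenAI's implementation; the hidden sizes, expert count, and top-k value are placeholder assumptions chosen only to show how a router sends each token to a few experts.

```python
# Illustrative top-k Mixture-of-Experts layer (not gpt-oss's actual architecture).
# Dimensions and expert counts are placeholders for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # scores every expert for each token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e               # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

tokens = torch.randn(4, 512)
print(TinyMoE()(tokens).shape)  # torch.Size([4, 512]); only 2 of 8 experts ran per token
```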

Add to that native MXFP4 quantization, shrinking model memory footprints without sacrificing accuracy. The 120B model fits snugly onto a single advanced GPU; the 20B model can run comfortably on laptops, desktops, and even mobile hardware.
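
These claims are easy to sanity-check with rough arithmetic. The sketch below estimates raw weight memory only; the 4-bit figures ignore MXFP4's block-scale metadata, activations, and KV cache (all simplifying assumptions), yet they already show why the 120B model can sit on a single 80 GB-class GPU and the 20B model within a 16 GB device.

```python
# Back-of-the-envelope weight-memory estimate; ignores scale metadata,
# activations, and KV cache, so real footprints will be somewhat larger.
params_120b = 117e9   # parameter counts stated in the article
params_20b = 21e9

def weight_gb(n_params, bits_per_param):
    return n_params * bits_per_param / 8 / 1e9

print(f"120B @ 16-bit: ~{weight_gb(params_120b, 16):.1f} GB")            # 234.0 GB -> multi-GPU territory
print(f"120B @ 4-bit (MXFP4-style): ~{weight_gb(params_120b, 4):.1f} GB") # 58.5 GB -> fits one 80 GB GPU
print(f"20B  @ 4-bit (MXFP4-style): ~{weight_gb(params_20b, 4):.1f} GB")  # 10.5 GB -> within a 16 GB device
```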

Real-World Impact: Tools for Enterprise, Developers, and Hobbyists

Enterprises can deploy the models on their own infrastructure to keep data private and satisfy compliance requirements. Developers get a fully controllable environment with no API restrictions. Hobbyists and the broader community can pull the weights from Hugging Face and start fine-tuning or building immediately.

How Does GPT-OSS Stack Up?

Here’s the kicker: gpt-oss-120B is the first freely available open-weight model that matches the performance of top-tier commercial models like o4-mini. The 20B variant not only bridges the performance gap for on-device AI but will likely accelerate innovation and push boundaries on what’s possible with local LLMs.

The Future Is Open (Again)

OpenAI’s GPT-OSS isn’t just a release; it’s a clarion call. By making state-of-the-art reasoning, tool use, and agentic capabilities available for anyone to inspect and deploy, OpenAI throws open the door to an entire community of makers, researchers, and enterprises—not just to use, but to build on, iterate, and evolve.


Check out gpt-oss-120B, gpt-oss-20B, and the Technical Blog. Feel free to check out our GitHub Page for Tutorials, Codes, and Notebooks. Also, feel free to follow us on Twitter, and don't forget to join our 100k+ ML SubReddit and subscribe to our Newsletter.

