AI News, July 20, 2024
Mistral AI and NVIDIA unveil 12B NeMo model

Mistral AI, in partnership with NVIDIA, has launched NeMo, a 12-billion-parameter model with a 128,000-token context window. It performs strongly in reasoning, world knowledge, and coding accuracy, and is available as open source.

🤔 **Strong performance and ease of use:** Mistral NeMo excels at reasoning, world knowledge, and coding accuracy, and offers a 128,000-token context window. It is designed as a drop-in replacement for systems currently using Mistral 7B and, because it relies on a standard architecture, is easy to adopt.

🚀 **Open-source access:** Mistral AI has released both the pre-trained base model and the instruction-tuned checkpoints under the Apache 2.0 license, encouraging adoption and further research. This should accelerate the model's integration into a wide range of applications.

🌐 **Multilingual support:** Mistral NeMo supports many languages, including English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi, making it well suited to global applications.

💪 **Efficient tokenisation:** Mistral NeMo introduces Tekken, a new tokeniser based on Tiktoken that compresses natural language text and source code more efficiently than the SentencePiece tokeniser used in previous Mistral models.

🤝 **Collaboration with NVIDIA:** Mistral NeMo is packaged as an NVIDIA NIM inference microservice, available through ai.nvidia.com, which simplifies deployment for organisations already invested in the NVIDIA AI ecosystem.

📊 **Performance comparison:** Mistral AI has published comparisons between the Mistral NeMo base model and two recent open-source pre-trained models, Gemma 2 9B and Llama 3 8B, highlighting Mistral NeMo's strengths.

💡 **Quantisation-aware training:** Mistral NeMo was trained with quantisation awareness, enabling FP8 inference without compromising performance. This matters for organisations that want to deploy large language models efficiently.

Mistral AI has announced NeMo, a 12B model created in partnership with NVIDIA. This new model boasts an impressive context window of up to 128,000 tokens and claims state-of-the-art performance in reasoning, world knowledge, and coding accuracy for its size category.

The collaboration between Mistral AI and NVIDIA has resulted in a model that not only pushes the boundaries of performance but also prioritises ease of use. Mistral NeMo is designed to be a seamless replacement for systems currently using Mistral 7B, thanks to its reliance on standard architecture.

In a move to encourage adoption and further research, Mistral AI has made both pre-trained base and instruction-tuned checkpoints available under the Apache 2.0 license. This open-source approach is likely to appeal to researchers and enterprises alike, potentially accelerating the model’s integration into various applications.

One of the key features of Mistral NeMo is its quantisation awareness during training, which enables FP8 inference without compromising performance. This capability could prove crucial for organisations looking to deploy large language models efficiently.
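A minimal sketch of what FP8 deployment might look like in practice, assuming vLLM's on-the-fly FP8 weight quantisation and the Hugging Face repo id mistralai/Mistral-Nemo-Instruct-2407 (both are assumptions for illustration, not details from the announcement):

```python
# Minimal sketch: serving Mistral NeMo with FP8 weights via vLLM.
# Assumes vLLM with FP8 support on compatible hardware (e.g. Hopper GPUs)
# and that the Hugging Face repo id below is correct -- adjust as needed.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Nemo-Instruct-2407",  # assumed HF repo id
    quantization="fp8",        # on-the-fly FP8 weight quantisation
    max_model_len=16384,       # trim the 128k window to fit smaller GPUs
)

params = SamplingParams(temperature=0.3, max_tokens=256)
outputs = llm.generate(["Explain quantisation-aware training in two sentences."], params)
print(outputs[0].outputs[0].text)
```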

Mistral AI has provided performance comparisons between the Mistral NeMo base model and two recent open-source pre-trained models: Gemma 2 9B and Llama 3 8B.

“The model is designed for global, multilingual applications. It is trained on function calling, has a large context window, and is particularly strong in English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi,” explained Mistral AI.

“This is a new step toward bringing frontier AI models to everyone’s hands in all languages that form human culture.”

Mistral NeMo introduces Tekken, a new tokeniser based on Tiktoken. Trained on over 100 languages, Tekken offers improved compression efficiency for both natural language text and source code compared to the SentencePiece tokeniser used in previous Mistral models. The company reports that Tekken is approximately 30% more efficient at compressing source code and several major languages, with even more significant gains for Korean and Arabic.
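As a rough illustration of what "compression efficiency" means here, the sketch below counts the tokens two tokenisers produce for the same snippet using Hugging Face transformers. The repo ids are assumptions, the snippet is arbitrary, and this is not the benchmark Mistral AI used:

```python
# Rough sketch: comparing how many tokens two tokenisers need for the same text,
# as a proxy for compression efficiency. The repo ids are assumptions; gated
# repos may require Hugging Face authentication.
from transformers import AutoTokenizer

nemo_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")  # Tekken
m7b_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")               # SentencePiece-based

sample = "def greet(name: str) -> str:\n    return f'Hello, {name}!'"
for name, tok in [("Tekken (NeMo)", nemo_tok), ("Mistral 7B", m7b_tok)]:
    n_tokens = len(tok.encode(sample, add_special_tokens=False))
    print(f"{name}: {n_tokens} tokens for {len(sample)} characters")
```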

Mistral AI also claims that Tekken outperforms the Llama 3 tokeniser in text compression for about 85% of all languages, potentially giving Mistral NeMo an edge in multilingual applications.

The model’s weights are now available on HuggingFace for both the base and instruct versions. Developers can start experimenting with Mistral NeMo using the mistral-inference tool and adapt it with mistral-finetune. For those using Mistral’s platform, the model is accessible under the name open-mistral-nemo.
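For those calling the hosted model, here is a minimal sketch of a request against Mistral's chat completions API using the open-mistral-nemo name mentioned above; the endpoint path and response shape are assumed to follow Mistral's OpenAI-style API:

```python
# Minimal sketch: querying the model through Mistral's hosted API under the
# open-mistral-nemo name. Set the MISTRAL_API_KEY environment variable first.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mistral-nemo",
        "messages": [{"role": "user", "content": "Summarise the Apache 2.0 license in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```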

In a nod to the collaboration with NVIDIA, Mistral NeMo is also packaged as an NVIDIA NIM inference microservice, available through ai.nvidia.com. This integration could streamline deployment for organisations already invested in NVIDIA’s AI ecosystem.
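A hypothetical sketch of that deployment path, assuming the hosted NIM endpoints expose an OpenAI-compatible API; the base URL and model identifier below are placeholders to verify against the catalogue on ai.nvidia.com:

```python
# Hypothetical sketch: calling a hosted NIM endpoint through the OpenAI-compatible
# client. The base URL and model identifier are assumptions -- check ai.nvidia.com
# for the exact values, and supply an NVIDIA API key.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",   # assumed hosted NIM gateway
    api_key=os.environ["NVIDIA_API_KEY"],
)

completion = client.chat.completions.create(
    model="nv-mistralai/mistral-nemo-12b-instruct",   # assumed catalogue id
    messages=[{"role": "user", "content": "What context window does Mistral NeMo support?"}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```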

The release of Mistral NeMo represents a significant step forward in the democratisation of advanced AI models. By combining high performance, multilingual capabilities, and open-source availability, Mistral AI and NVIDIA are positioning this model as a versatile tool for a wide range of AI applications across various industries and research fields.

(Photo by David Clode)

See also: Meta joins Apple in withholding AI models from EU users

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Mistral AI and NVIDIA unveil 12B NeMo model appeared first on AI News.

Related tags

Mistral AI, NVIDIA, NeMo, large language models, open source