TechCrunch News, November 27, 2024
Ai2 releases new language models competitive with Meta’s Llama

Ai2 has released OLMo 2, a language-model family that meets the open-source definition. Trained on publicly available data and code, it comes in two sizes, 7 billion and 13 billion parameters, handles a variety of text tasks, and performs competitively with Meta's Llama 3.1. The models and their components can be downloaded from Ai2's website and used commercially.

🥇 OLMo 2 meets the open-source AI definition; its tools and data are publicly available

💪 Comes in two sizes, 7 billion and 13 billion parameters, with strong performance

📝 Handles a range of text tasks, such as question answering and summarization

🛠️ The models and their components can be downloaded from Ai2's website and used commercially

There’s a new AI model family on the block, and it’s one of the few that can be reproduced from scratch.

On Tuesday, Ai2, the nonprofit AI research organization founded by the late Paul Allen, released OLMo 2, the second family of models in its OLMo series. (OLMo’s short for “Open Language Model.”) While there’s no shortage of “open” language models to choose from (see: Meta’s Llama), OLMo 2 meets the Open Source Initiative’s definition of open source AI, meaning the tools and data used to develop it are publicly available.

The Open Source Initiative, the long-running institution aiming to define and “steward” all things open source, finalized its open source AI definition in October. But the first OLMo models, released in February, met the criterion as well.

“OLMo 2 [was] developed start-to-finish with open and accessible training data, open-source training code, reproducible training recipes, transparent evaluations, intermediate checkpoints, and more,” Ai2 wrote in a blog post. “By openly sharing our data, recipes, and findings, we hope to provide the open-source community with the resources needed to discover new and innovative approaches.”

There are two models in the OLMo 2 family: one with 7 billion parameters (OLMo 2 7B) and one with 13 billion parameters (OLMo 2 13B). Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer.

Like most language models, OLMo 2 7B and 13B can perform a range of text-based tasks, like answering questions, summarizing documents, and writing code.

To train the models, Ai2 used a data set of 5 trillion tokens. Tokens represent bits of raw data; 1 million tokens is equal to about 750,000 words. The training set included websites “filtered for high quality,” academic papers, Q&A discussion boards, and math workbooks “both synthetic and human generated.”
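As a back-of-the-envelope check, the article's rule of thumb (1 million tokens ≈ 750,000 words) implies the 5-trillion-token training set corresponds to roughly 3.75 trillion words. A minimal sketch of that arithmetic, using the ratio stated above (the function name is illustrative, not part of any Ai2 tooling):

```python
# Rough scale of OLMo 2's training data, using the article's
# rule of thumb that 1 million tokens ≈ 750,000 words.
WORDS_PER_TOKEN = 750_000 / 1_000_000  # ≈ 0.75 words per token

def tokens_to_words(tokens: int) -> int:
    """Estimate the approximate word count for a given token count."""
    return int(tokens * WORDS_PER_TOKEN)

training_tokens = 5_000_000_000_000  # 5 trillion tokens
print(f"≈ {tokens_to_words(training_tokens):,} words")
# → ≈ 3,750,000,000,000 words
```

The exact words-per-token ratio varies by tokenizer and language; this is only a ballpark estimate of the corpus size.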

Ai2 claims the result is models that are competitive, performance-wise, with open models like Meta’s Llama 3.1 release.

Image credits: Ai2

“Not only do we observe a dramatic improvement in performance across all tasks compared to our earlier OLMo model but, notably, OLMo 2 7B outperforms Llama 3.1 8B,” Ai2 writes. “OLMo 2 [represents] the best fully-open language models to date.”

The OLMo 2 models and all of their components can be downloaded from Ai2’s website. They’re available under the Apache 2.0 license, meaning they can be used commercially.

There’s been some debate recently over the safety of open models, what with Llama models reportedly being used by Chinese researchers to develop defense tools. When I asked Ai2 engineer Dirk Groeneveld in February whether he was concerned about OLMo being abused, he told me that he believes the benefits ultimately outweigh the harms.

“Yes, it’s possible open models may be used inappropriately or for unintended purposes,” he said. “[However, this] approach also promotes technical advancements that lead to more ethical models; is a prerequisite for verification and reproducibility, as these can only be achieved with access to the full stack; and reduces a growing concentration of power, creating more equitable access.”
