TechCrunch News · April 6
Meta releases Llama 4, a new crop of flagship AI models

Meta has introduced the latest additions to its Llama family of AI models: Llama 4, comprising three models named Scout, Maverick, and Behemoth. They were trained on large amounts of unlabeled text, image, and video data and are designed to provide broad visual understanding. Llama 4 adopts a mixture-of-experts (MoE) architecture, which improves computational efficiency. Maverick excels at general assistant and chat use cases, beating GPT-4o and Gemini 2.0 on some metrics, while Scout is strong at document summarization and reasoning over large codebases, with an extra-large context window of 10 million tokens. Meta has also tuned Llama 4 to refuse to answer contentious questions less often and to strive for balance when responding to different viewpoints.

🤖 The Llama 4 series includes Scout, Maverick, and Behemoth, all trained on large-scale unlabeled data to improve visual understanding.

💡 Llama 4 uses a mixture-of-experts (MoE) architecture, which improves computational efficiency by breaking tasks into subtasks and delegating them to specialized "expert" models.

💻 Maverick excels at general assistant and chat use cases, beating GPT-4o and Gemini 2.0 on some benchmarks; Scout is strong at document summarization and reasoning over large codebases, with an extra-large context window of 10 million tokens.

⚖️ Meta has tuned Llama 4 to refuse to answer contentious questions less often and to strive for balance when responding to different viewpoints.

Meta has released a new collection of AI models, Llama 4, in its Llama family — on a Saturday, no less.

There are three new models in total: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. All were trained on “large amounts of unlabeled text, image, and video data” to give them “broad visual understanding,” Meta says.

The success of open models from Chinese AI lab DeepSeek, which perform on par or better than Meta’s previous flagship Llama models, reportedly kicked Llama development into overdrive. Meta is said to have scrambled war rooms to decipher how DeepSeek lowered the cost of running and deploying models like R1 and V3.

Scout and Maverick are openly available on Llama.com and from Meta’s partners, including the AI dev platform Hugging Face, while Behemoth is still in training. Meta says that Meta AI, its AI-powered assistant across apps including WhatsApp, Messenger, and Instagram, has been updated to use Llama 4 in 40 countries. Multimodal features are limited to the U.S. in English for now.
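For developers outside the restricted regions, access typically goes through Hugging Face and a library such as transformers. The snippet below is a rough sketch only: the repository ID is an assumption based on Meta's usual naming conventions, the weights are license-gated, and it assumes the checkpoint exposes a standard causal-LM interface.

```python
# Illustrative sketch of loading an openly released Llama 4 checkpoint via transformers.
# The repo ID is an ASSUMPTION (not confirmed by this article); the weights are gated,
# so you must accept Meta's license and authenticate (e.g., `huggingface-cli login`) first.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # full bf16 weights; quantization is needed to fit a single GPU
    device_map="auto",           # shard across whatever GPUs are available
)

prompt = "Summarize the Llama 4 release in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```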

Some developers may take issue with the Llama 4 license.

Users and companies “domiciled” or with a “principal place of business” in the EU are prohibited from using or distributing the models, likely the result of governance requirements imposed by the region’s AI and data privacy laws. (In the past, Meta has decried these laws as overly burdensome.) In addition, as with previous Llama releases, companies with more than 700 million monthly active users must request a special license from Meta, which Meta can grant or deny at its sole discretion.

“These Llama 4 models mark the beginning of a new era for the Llama ecosystem,” Meta wrote in a blog post. “This is just the beginning for the Llama 4 collection.”

Image Credits: Meta

Meta says that Llama 4 is its first cohort of models to use a mixture of experts (MoE) architecture, which is more computationally efficient for training and answering queries. MoE architectures basically break down data processing tasks into subtasks and then delegate them to smaller, specialized “expert” models. 

Maverick, for example, has 400 billion total parameters, but only 17 billion active parameters across 128 “experts.” (Parameters roughly correspond to a model’s problem-solving skills.) Scout has 17 billion active parameters, 16 experts, and 109 billion total parameters.
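As a toy illustration of what that means mechanically (a sketch, not Meta's implementation), a sparse MoE layer scores every expert for each token, keeps only the top few, and runs just those experts' feed-forward blocks, which is why only a fraction of the total parameters are "active" for any given token.

```python
# Toy sparse mixture-of-experts layer (illustrative only, not Meta's implementation).
# A router scores every expert per token, keeps the top-k, and mixes their outputs,
# so only k of the num_experts feed-forward blocks run for each token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                              # x: (num_tokens, d_model)
        scores = self.router(x)                        # (num_tokens, num_experts)
        weights, expert_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)           # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(10, 64)                           # 10 tokens, each a 64-dim vector
print(layer(tokens).shape)                             # torch.Size([10, 64])
```

Scaled up, that routing is how a model like Maverick can hold 400 billion parameters in total while exercising only about 17 billion of them per token.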

According to Meta’s internal testing, Maverick, which the company says is best for “general assistant and chat” use cases like creative writing, exceeds models such as OpenAI’s GPT-4o and Google’s Gemini 2.0 on certain coding, reasoning, multilingual, long-context, and image benchmarks. However, Maverick doesn’t quite measure up to more capable recent models like Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and OpenAI’s GPT-4.5.

Scout’s strengths lie in tasks like document summarization and reasoning over large codebases. Uniquely, it has a very large context window: 10 million tokens. (“Tokens” represent bits of raw text — e.g., the word “fantastic” split into “fan,” “tas” and “tic.”) In plain English, Scout can take in images and up to millions of words, allowing it to process and work with extremely large documents.
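To put a 10-million-token window in concrete terms, a developer might run a pre-flight check like the sketch below before feeding a model a huge document. The tokenizer repository ID and the input file name are assumptions for illustration, and exact token counts differ between tokenizers.

```python
# Rough sketch: check whether a large document fits in a 10M-token context window.
# The tokenizer repo ID is an ASSUMPTION (and license-gated); any compatible tokenizer
# illustrates the idea.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")

with open("large_codebase_dump.txt", encoding="utf-8") as f:  # hypothetical input file
    text = f.read()

token_ids = tokenizer.encode(text)
print(f"Document length: {len(token_ids):,} tokens")
print("Fits in Scout's 10M-token window:", len(token_ids) <= 10_000_000)
```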

Scout can run on a single Nvidia H100 GPU, while Maverick requires an Nvidia H100 DGX system, according to Meta.
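A back-of-the-envelope estimate of weight memory suggests why (a rough sketch that ignores the KV cache and activations, and assumes the commonly cited 80 GB of HBM on an H100): Scout's weights fit on one card only at reduced precision, while Maverick's do not.

```python
# Back-of-the-envelope GPU memory estimate for model weights alone
# (ignores KV cache, activations, and framework overhead).
GB = 1024 ** 3

def weight_memory_gb(total_params: float, bits_per_param: int) -> float:
    return total_params * bits_per_param / 8 / GB

for name, params in [("Scout", 109e9), ("Maverick", 400e9)]:
    for bits in (16, 4):
        print(f"{name:8s} @ {bits:2d}-bit: {weight_memory_gb(params, bits):6.0f} GB")

# Roughly: Scout needs ~203 GB at 16-bit but ~51 GB at 4-bit (under an H100's 80 GB),
# while Maverick needs ~186 GB even at 4-bit, hence the multi-GPU DGX system.
```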

Meta’s unreleased Behemoth will need even beefier hardware. According to the company, Behemoth has 288 billion active parameters, 16 experts, and nearly two trillion total parameters. Meta’s internal benchmarking has Behemoth outperforming GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro (but not 2.5 Pro) on several evaluations measuring STEM skills like math problem solving.

Of note, none of the Llama 4 models is a proper “reasoning” model along the lines of OpenAI’s o1 and o3-mini. Reasoning models fact-check their answers and generally respond to questions more reliably, but as a consequence take longer than traditional, “non-reasoning” models to deliver answers.

Interestingly, Meta says that it tuned all of its Llama 4 models to refuse to answer “contentious” questions less often. According to the company, Llama 4 responds to “debated” political and social topics that the previous crop of Llama models wouldn’t. In addition, the company says, Llama 4 is “dramatically more balanced” with which prompts it flat-out won’t entertain.

“[Y]ou can count on [Llama 4] to provide helpful, factual responses without judgment,” a Meta spokesperson told TechCrunch. “[W]e’re continuing to make Llama more responsive so that it answers more questions, can respond to a variety of different viewpoints […] and doesn’t favor some views over others.”

Those tweaks come as White House allies accuse AI chatbots of political “wokeness.”

Many of President Donald Trump’s close confidants, including Elon Musk and crypto and AI “czar” David Sacks, have alleged that many AI chatbots censor conservative viewpoints. Sacks has historically singled out OpenAI’s ChatGPT in particular as “programmed to be woke” and untruthful about politically sensitive subjects.

In truth, bias in AI is an intractable technical problem. Musk’s own AI company, xAI, has struggled to create a chatbot that doesn’t endorse some political views over others.

That hasn’t stopped companies including OpenAI from adjusting their AI models to answer more questions than they would have previously, in particular questions on controversial political subjects.
