TechCrunch News · February 4
Meta says it may stop development of AI systems it deems too risky

On its path toward artificial general intelligence (AGI), Meta has published a new policy document, the Frontier AI Framework. The framework states that Meta will decide whether to publicly release powerful AI systems it develops internally based on their risk level. It sorts AI systems into two categories, "high risk" and "critical risk": the former could make attacks easier to carry out, while the latter could lead to catastrophic outcomes that cannot be mitigated. Meta stresses that its risk assessment does not rest on any single empirical test; it is informed by input from internal and external researchers and reviewed by senior decision-makers. For high-risk systems, Meta will restrict internal access and withhold release until the risk is reduced; for critical-risk systems, it will apply security measures to prevent leaks and pause development. The framework is a response to criticism of Meta's open AI strategy and aims to balance technological progress against its potential risks.

⚠️ Meta sorts AI systems into "high risk" and "critical risk" categories: the former could make cybersecurity, chemical, and biological attacks easier to carry out, while the latter could lead to catastrophic outcomes that cannot be mitigated.

🔬 Meta's risk assessment is not based on any single empirical test; it is informed by input from internal and external researchers and reviewed by senior decision-makers, because Meta believes the current science of evaluation is not robust enough to provide definitive quantitative metrics.

🔒 For high-risk systems, Meta will restrict internal access and consider release only after risk mitigations are in place; for critical-risk systems, Meta will apply security protections to prevent the system from being exfiltrated and pause development until the risk can be reduced.

⚖️ Meta's Frontier AI Framework is a response to criticism of its open AI strategy, aiming to balance the benefits of technological progress against its potential risks and to ensure AI technology benefits society while maintaining an appropriate level of risk.

Meta CEO Mark Zuckerberg has pledged to make artificial general intelligence (AGI) — which is roughly defined as AI that can accomplish any task a human can — openly available one day. But in a new policy document, Meta suggests that there are certain scenarios in which it may not release a highly capable AI system it developed internally.

The document, which Meta is calling its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: “high risk” and “critical risk” systems.

As Meta defines them, both "high-risk" and "critical-risk" systems are capable of aiding in cybersecurity, chemical, and biological attacks, the difference being that "critical-risk" systems could result in a "catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context." High-risk systems, by contrast, might make an attack easier to carry out, but not as reliably as a critical-risk system would.

Which sort of attacks are we talking about here? Meta gives a few examples, like the “automated end-to-end compromise of a best-practice-protected corporate-scale environment” and the “proliferation of high-impact biological weapons.” The list of possible catastrophes in Meta’s document is far from exhaustive, the company acknowledges, but includes those that Meta believes to be “the most urgent” and plausible to arise as a direct result of releasing a powerful AI system.

Somewhat surprising is that, according to the document, Meta classifies system risk not based on any one empirical test but informed by the input of internal and external researchers who are subject to review by “senior-level decision-makers.” Why? Meta says that it doesn’t believe the science of evaluation is “sufficiently robust as to provide definitive quantitative metrics” for deciding a system’s riskiness.

If Meta determines a system is high-risk, the company says it will limit access to the system internally and won’t release it until it implements mitigations to “reduce risk to moderate levels.” If, on the other hand, a system is deemed critical-risk, Meta says it will implement unspecified security protections to prevent the system from being exfiltrated and stop development until the system can be made less dangerous.

Meta’s Frontier AI Framework, which the company says will evolve with the changing AI landscape, appears to be a response to criticism of the company’s “open” approach to system development. Meta has embraced a strategy of making its AI technology openly available — albeit not open source by the commonly understood definition — in contrast to companies like OpenAI that opt to gate their systems behind an API.

For Meta, the open release approach has proven to be a blessing and a curse. The company’s family of AI models, called Llama, has racked up hundreds of millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.

In publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI strategy with Chinese AI firm DeepSeek’s. DeepSeek also makes its systems openly available. But the company’s AI has few safeguards and can be easily steered to generate toxic and harmful outputs.

“[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI,” Meta writes in the document, “it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.”
