Mashable, February 5
Are some AGI systems too risky to release? Meta says so.
Meta CEO Mark Zuckerberg has released a new policy document suggesting that Meta may slow or halt development of AGI (artificial general intelligence) systems it deems "high risk" or "critical risk." Although Zuckerberg has pledged to make AGI openly available one day, in the "Frontier AI Framework" document he concedes that some highly capable AI systems may never be released publicly because the risks are too great. The framework focuses on critical areas such as cybersecurity threats and chemical and biological weapons risks, aiming to protect national security while promoting innovation. If Meta judges the risks too high, it will restrict a system to internal use rather than open it to the public.

🛡️ Meta has published its "Frontier AI Framework," which emphasizes assessing and mitigating the potential threats of high-risk AGI systems, particularly cybersecurity threats and chemical and biological weapons risks.

🔬 The framework aims to identify potential catastrophic outcomes related to cyber, chemical, and biological risks, and to take measures to prevent them.

🎭 Meta will run threat modeling exercises to anticipate how different actors might misuse frontier AI to produce catastrophic outcomes, and will develop corresponding countermeasures to keep risks within acceptable levels.

🔒 If Meta's assessment finds an AGI system's risks too high, it will not allow public access and will instead keep the system for internal use.

Ever since AI entered our world, its creators have kept a lead foot on the gas. However, according to a new policy document, Meta CEO Mark Zuckerberg might slow or stop the development of AGI systems that are deemed "high risk" or "critical risk."

AGI is an AI system that can do anything a human can do, and Zuckerberg promised to make it openly available one day. But in the document "Frontier AI Framework," Zuckerberg concedes that some highly capable AI systems won't be released publicly because they could be too risky.

The framework "focuses on the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons."

"By prioritizing these areas, we can work to protect national security while promoting innovation. Our framework outlines a number of processes we follow to anticipate and mitigate risk when developing frontier AI systems," a press release about the document reads.

For example, the framework intends to identify "potential catastrophic outcomes related to cyber, chemical and biological risks that we strive to prevent." It also conducts "threat modeling exercises to anticipate how different actors might seek to misuse frontier AI to produce those catastrophic outcomes" and has "processes in place to keep risks within acceptable levels."

If the company determines that the risks are too high, it will keep the system internal instead of allowing public access.

"While the focus of this Framework is on our efforts to anticipate and mitigate risks of catastrophic outcomes, it is important to emphasize that the reason to develop advanced AI systems in the first place is because of the tremendous potential for benefits to society from those technologies," the document reads.

Yet, it looks like Zuckerberg’s hitting the brakes — at least for now — on AGI’s fast track to the future.
