TechCrunch News · April 17, 05:21
OpenAI’s latest AI models have a new safeguard to prevent biorisks

OpenAI has rolled out a new system to monitor its latest AI models, o3 and o4-mini, to prevent them from offering harmful advice about biological and chemical threats. The system, described in OpenAI's safety report, is meant to reduce the risk of the models being used to help carry out potentially dangerous attacks. Because o3 and o4-mini are more capable than previous models, they pose new risks. To address these, OpenAI built the new monitoring system, custom-trained on OpenAI's content policies, which runs on top of o3 and o4-mini. In testing, the system performed well at refusing risky prompts. OpenAI acknowledges that its tests could not fully simulate users trying to bypass the monitor, so it will continue to rely on human monitoring.

🔬 OpenAI deployed a new system to monitor the o3 and o4-mini models, focusing on prompts related to biological and chemical threats, to prevent the models from offering harmful advice.

🛡️ The monitoring system, part of OpenAI's safety strategy, is meant to reduce the risk of the models being exploited by malicious users; it runs on top of the o3 and o4-mini models.

✅ To establish a baseline, OpenAI's red teamers spent roughly 1,000 hours flagging "unsafe" biorisk-related conversations from o3 and o4-mini.

🚫 In a test simulating the safety monitor's "blocking logic," the models declined to respond to risky prompts 98.7% of the time.

⚠️ OpenAI acknowledges that the test did not fully simulate users trying to bypass the monitor, so it will continue to rely on human monitoring while actively tracking the potential risks its models pose.

OpenAI says that it deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. The system aims to prevent the models from offering advice that could instruct someone on carrying out potentially harmful attacks, according to OpenAI’s safety report.

O3 and o4-mini represent a meaningful capability increase over OpenAI’s previous models, the company says, and thus pose new risks in the hands of bad actors. According to OpenAI’s internal benchmarks, o3 is more skilled at answering questions around creating certain types of biological threats in particular. For this reason — and to mitigate other risks — OpenAI created the new monitoring system, which the company describes as a “safety-focused reasoning monitor.”

The monitor, custom-trained to reason about OpenAI’s content policies, runs on top of o3 and o4-mini. It’s designed to identify prompts related to biological and chemical risk and instruct the models to refuse to offer advice on those topics.
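OpenAI has not published the monitor's implementation. As a rough illustration of the general pattern being described, a classifier gating a model's output, the sketch below uses entirely hypothetical names (`classify_risk`, `generate_answer`, `monitored_completion`); none of them are OpenAI's API.

```python
# Hypothetical sketch of a "monitor on top of the model" pattern.
# All function names and the keyword list are placeholders, not
# anything OpenAI has disclosed.

REFUSAL = "I can't help with that request."

def classify_risk(prompt: str) -> bool:
    # Stand-in for the policy-trained reasoning monitor. A real
    # monitor is itself a model; here we just match keywords.
    risky_terms = ("pathogen synthesis", "nerve agent", "weaponize")
    return any(term in prompt.lower() for term in risky_terms)

def generate_answer(prompt: str) -> str:
    # Stand-in for the underlying model (o3 / o4-mini).
    return f"Answer to: {prompt}"

def monitored_completion(prompt: str) -> str:
    # The monitor runs on top of the model: flagged prompts are
    # refused instead of being answered.
    if classify_risk(prompt):
        return REFUSAL
    return generate_answer(prompt)
```

The key design point is that the refusal decision is made by a separate, safety-specialized component rather than relying solely on the base model's own training.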

To establish a baseline, OpenAI had red teamers spend around 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. During a test in which OpenAI simulated the “blocking logic” of its safety monitor, the models declined to respond to risky prompts 98.7% of the time, according to OpenAI.
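The 98.7% figure is a refusal rate over the red-teamed prompt set. As a small sketch of how such a metric is computed (the counts below are illustrative, not OpenAI's data):

```python
def refusal_rate(outcomes: list[bool]) -> float:
    # outcomes[i] is True if the model declined risky prompt i.
    return sum(outcomes) / len(outcomes)

# Illustrative only: 987 refusals out of 1,000 flagged prompts.
outcomes = [True] * 987 + [False] * 13
print(f"{refusal_rate(outcomes):.1%}")  # → 98.7%
```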

OpenAI acknowledges that its test didn’t account for people who might try new prompts after getting blocked by the monitor, which is why the company says it’ll continue to rely in part on human monitoring.

O3 and o4-mini don’t cross OpenAI’s “high risk” threshold for biorisks, according to the company. However, compared to o1 and GPT-4, OpenAI says that early versions of o3 and o4-mini proved more helpful at answering questions around developing biological weapons.

Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)

The company is actively tracking how its models could make it easier for malicious users to develop chemical and biological threats, according to OpenAI’s recently updated Preparedness Framework.

OpenAI is increasingly relying on automated systems to mitigate the risks from its models. For example, to prevent GPT-4o's native image generator from creating child sexual abuse material (CSAM), OpenAI says it uses a reasoning monitor similar to the one the company deployed for o3 and o4-mini.

Yet several researchers have raised concerns that OpenAI isn't prioritizing safety as much as it should. One of the company's red-teaming partners, Metr, said it had relatively little time to test o3 on a benchmark for deceptive behavior. Meanwhile, OpenAI decided not to release a safety report for its GPT-4.1 model, which launched earlier this week.
