Fortune | FORTUNE July 18, 22:09
OpenAI warns that its new ChatGPT Agent has the ability to aid dangerous bioweapon development

 

OpenAI's newly released ChatGPT Agent is an AI tool that can carry out a range of tasks on a user's behalf, including gathering data, creating spreadsheets, booking travel, and building presentations. However, OpenAI has also classified the tool as having a "high" biorisk capability, meaning it could help individuals without specialized expertise create biological or chemical threats. OpenAI has responded with precautionary measures, including additional safeguards and strict monitoring. Despite the potential risks, agentic AI tools of this kind are also seen as an important direction for AI development and could bring breakthroughs in medicine. OpenAI emphasizes user control to reduce risk, ensuring users can pause, redirect, or stop the Agent at any time.

🎯 **ChatGPT Agent enables multi-task automation**: OpenAI's new ChatGPT Agent is a powerful AI tool that acts like a personal assistant, automatically completing complex tasks on a user's behalf, such as gathering data, creating spreadsheets, booking travel, and building presentations. Operating through a virtual computer, it can control web browsers, interact with files, and move seamlessly between applications such as spreadsheet and presentation software, greatly improving efficiency and convenience.

⚠️ **The potential threat of biorisk**: OpenAI has classified ChatGPT Agent as having a "high" biorisk capability, meaning it could help "novice"-level users without specialized expertise create known biological or chemical threats. OpenAI's assessment indicates that, unlike nuclear threats, obtaining materials for biological threats is a lower barrier, making the scarcity of knowledge and lab skills the key limiting factor. An unmitigated ChatGPT Agent could significantly narrow that knowledge gap, offering advice approaching expert level and increasing the likelihood of bioterror events.

🛡️ **OpenAI's risk mitigation measures**: Facing the potential threat of biorisk, OpenAI has taken a precautionary approach and deployed additional safeguards for the tool. These include refusing prompts that could be used to build biological weapons, flagging potentially unsafe requests for expert review, strictly blocking risky content, responding to problems faster, and robustly monitoring for any signs of misuse. OpenAI also emphasizes user control, requiring the Agent to request permission before taking significant actions and allowing users to pause, redirect, or stop it at any time.

🚀 **Industry competition and application prospects for AI agents**: AI agents are one of the hottest, and riskiest, areas of current AI development, and OpenAI's ChatGPT Agent follows similar releases from Google and Anthropic. Big Tech companies see AI agents as a major commercial opportunity, as enterprises increasingly integrate AI into workflows to automate tasks. At the same time, agent capabilities could also drive breakthroughs in the life sciences, such as accelerating medical research and discovery, making the balance between risk mitigation and technological progress a key challenge.

OpenAI’s newest product promises to make it easier for someone to automatically gather data, create spreadsheets, book travel, spin up slide decks—and, just maybe, build a biological weapon. ChatGPT Agent, a new agentic AI tool that can take action on a user’s behalf, is the first product OpenAI has classified as having a “high” capability for biorisk.

This means the model can provide meaningful assistance to “novice” actors and enable them to create known biological or chemical threats. The real-world implications of this could mean that biological or chemical terror events by non-state actors become more likely and frequent, according to OpenAI’s “Preparedness Framework,” which the company uses to track and prepare for new risks of severe harm from its frontier models.

“Some might think that biorisk is not real, and models only provide information that could be found via search. That may have been true in 2024 but is definitely not true today. Based on our evaluations and those of our experts, the risk is very real,” Boaz Barak, a member of the technical staff at OpenAI, said in a social media post.

“While we can’t say for sure that this model can enable a novice to create severe biological harm, I believe it would have been deeply irresponsible to release this model without comprehensive mitigations such as the one we have put in place,” he added.

OpenAI said that classifying the model as high risk for bio-misuse was a “precautionary approach,” and one that had triggered extra safeguards for the tool.

Keren Gu, a safety researcher at OpenAI, said that while the company did not have definitive evidence that the model could meaningfully guide a novice to create something of severe biological harm, it had activated safeguards nonetheless. These safeguards include having ChatGPT Agent refuse prompts that could potentially be intended to help someone produce a bioweapon, systems that flag potentially unsafe requests for expert review, strict rules that block risky content, quicker responses to problems, and robust monitoring for any signs of misuse.
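OpenAI has not published how these safeguards are implemented. As a rough illustration only, the Python sketch below shows one common way a layered gate like the one described can be structured: hard refusals, escalation to human experts, and an audit trail for misuse monitoring. The risk tiers, keyword lists, and `review_queue` are invented for this sketch; real systems use trained classifiers, not keyword matching.

```python
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()     # request proceeds to the agent
    REFUSE = auto()    # hard block: likely weaponization intent
    ESCALATE = auto()  # ambiguous: hold for expert review

# Hypothetical risk tiers; a production system would use ML classifiers.
HARD_BLOCK = {"synthesize pathogen", "enhance transmissibility"}
SOFT_FLAG = {"culture protocol", "aerosolization"}

review_queue: list[str] = []                 # stand-in for an expert-review workflow
audit_log: list[tuple[str, Verdict]] = []    # misuse-monitoring trail

def screen_request(prompt: str) -> Verdict:
    """Layered gate: refuse clear bio-misuse, escalate ambiguous cases,
    and log every decision so misuse patterns can be monitored."""
    text = prompt.lower()
    if any(term in text for term in HARD_BLOCK):
        verdict = Verdict.REFUSE
    elif any(term in text for term in SOFT_FLAG):
        review_queue.append(prompt)          # held for expert review before any answer
        verdict = Verdict.ESCALATE
    else:
        verdict = Verdict.ALLOW
    audit_log.append((prompt, verdict))      # every decision is recorded
    return verdict

print(screen_request("Help me build a slide deck on lab safety"))  # Verdict.ALLOW
```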

One of the key challenges in mitigating the potential for biorisk is that the same capabilities could unlock life-saving medical breakthroughs, one of the big promises for advanced AI models.

The company has become increasingly concerned about the potential for model misuse in biological weapon development. In a blog post last month, OpenAI announced it was ramping up safety testing to reduce the risk of its models being used to aid in the creation of biological weapons. The AI lab warned that without these precautions, the models could soon enable “novice uplift”—helping individuals with little scientific background develop dangerous weapons.

“Unlike nuclear and radiological threats, obtaining materials is less of a barrier for creating bio threats, and hence security depends to a greater extent on the scarcity of knowledge and lab skills,” Barak said. “Based on our evaluations and external experts, an unmitigated ChatGPT Agent could narrow that knowledge gap and offer advice closer to a subject matter expert.”

ChatGPT Agent

OpenAI’s new ChatGPT feature is an attempt to cash in on one of the buzziest, and most risky, areas of AI development: agents.

The new feature functions like a personal assistant, capable of handling tasks such as booking restaurant reservations, online shopping, and organizing job candidate lists. Unlike previous versions, the tool can use a virtual computer to actively control web browsers, interact with files, and navigate across apps like spreadsheets and slide decks.
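The article does not detail the agent's internals, but tools of this kind are commonly built as a loop in which the model chooses a tool (browser, file system, spreadsheet) that runs inside a sandboxed virtual computer, with each result fed back as context. The tool names and dispatch logic below are illustrative assumptions, not OpenAI's actual API.

```python
from typing import Callable

# Illustrative tools the virtual computer might expose; names are invented.
def browse(url: str) -> str:
    return f"<html for {url}>"        # placeholder: fetch a web page

def edit_spreadsheet(cell: str, value: str) -> str:
    return f"set {cell}={value}"      # placeholder: spreadsheet operation

TOOLS: dict[str, Callable[..., str]] = {
    "browser": browse,
    "spreadsheet": edit_spreadsheet,
}

def run_agent(steps: list[tuple[str, tuple]]) -> list[str]:
    """Minimal agent loop: each step names a tool and its arguments.
    A real agent would let the model choose each next step from prior output."""
    transcript = []
    for tool_name, args in steps:
        result = TOOLS[tool_name](*args)
        transcript.append(result)     # fed back to the model as context
    return transcript

print(run_agent([("browser", ("https://example.com",)),
                 ("spreadsheet", ("A1", "fare: $420"))]))
```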

The company merged the teams behind Operator, its first AI agent, and Deep Research, a tool developed to conduct multi-step online research for complex tasks, to form a single group that developed the new tool.

AI labs are currently racing to build agents that can manage complex digital tasks independently, and the launch follows similar releases by Google and Anthropic. Big Tech companies see AI agents as a commercial opportunity, as companies are increasingly moving to implement AI into workflows and automate certain tasks.

OpenAI has acknowledged that greater autonomy introduces more risk and is emphasizing user control to mitigate these risks. For example, the agent asks for permission before taking significant action and can be paused, redirected, or stopped by the user at any time.
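As a minimal sketch of what such a human-in-the-loop control might look like, the snippet below gates "significant" actions behind user confirmation and lets the user halt the agent entirely. Which actions count as significant, and the confirmation flow itself, are assumptions made for illustration.

```python
# Actions we assume would count as "significant"; the real policy is not public.
SIGNIFICANT = {"purchase", "send_email", "submit_form"}

def confirm(action: str) -> bool:
    """Stand-in for a UI prompt; the user can also stop the agent here."""
    answer = input(f"Agent wants to '{action}'. Allow? [y/N/stop] ").strip().lower()
    if answer == "stop":
        raise KeyboardInterrupt("user stopped the agent")
    return answer == "y"

def execute(action: str) -> None:
    if action in SIGNIFICANT and not confirm(action):
        print(f"skipped '{action}' (user declined)")
        return
    print(f"performed '{action}'")

for step in ["browse_listings", "purchase"]:
    execute(step)  # routine steps run freely; 'purchase' waits for approval
```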
