The Verge - Artificial Intelligence · October 25, 2024
Departing OpenAI leader says no company is ready for AGI

OpenAI senior adviser Miles Brundage has announced his departure, saying no one is prepared for artificial general intelligence, including OpenAI itself. His exit is the latest in a series of high-profile departures from OpenAI's safety teams, underscoring the tension between the company's original mission and its commercial ambitions. The company faces pressure to restructure, and Brundage says his research and publication freedom at OpenAI was constrained; he believes he can have a greater impact on global AI governance from the outside, while OpenAI says it will support his future work.

🥺 OpenAI senior adviser Miles Brundage says that neither OpenAI nor any other frontier lab is ready for artificial general intelligence, and neither is the world. He spent six years at OpenAI helping to shape the company's AI safety initiatives.

😔 OpenAI's "AGI Readiness" team has been dissolved, following the earlier disbanding of its "Superalignment" team, highlighting the conflict between the company's original mission and its commercial ambitions. The company faces pressure to transition from a nonprofit to a for-profit structure, a shift that has long concerned Brundage.

😕 Brundage says his research and publication freedom at OpenAI was constrained. He stresses that AI policy discussions need independent voices, free from industry biases and conflicts of interest, and believes he can play a greater role in global AI governance after leaving OpenAI.

🤝 Despite these frictions, OpenAI says it will support Brundage's future work with no-strings-attached funding, API credits, and early model access.

Image: The Verge

Miles Brundage, OpenAI’s senior adviser for the readiness of AGI (aka human-level artificial intelligence), delivered a stark warning as he announced his departure on Wednesday: no one is prepared for artificial general intelligence, including OpenAI itself.

“Neither OpenAI nor any other frontier lab is ready [for AGI], and the world is also not ready,” wrote Brundage, who spent six years helping to shape the company’s AI safety initiatives. “To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time.”

His exit marks the latest in a series of high-profile departures from OpenAI’s safety teams. Jan Leike, a prominent researcher, left after claiming that “safety culture and processes have taken a backseat to shiny products.” Cofounder Ilya Sutskever also departed to launch his own AI startup focused on safe AGI development.

The dissolution of Brundage’s “AGI Readiness” team, coming just months after the company disbanded its “Superalignment” team dedicated to long-term AI risk mitigation, highlights mounting tensions between OpenAI’s original mission and its commercial ambitions. The company reportedly faces pressure to transition from a nonprofit to a for-profit public benefit corporation within two years — or risk returning funds from its recent $6.6 billion investment round. This shift toward commercialization has long concerned Brundage, who expressed reservations back in 2019 when OpenAI first established its for-profit division.

In explaining his departure, Brundage cited increasing constraints on his research and publication freedom at the high-profile company. He emphasized the need for independent voices in AI policy discussions, free from industry biases and conflicts of interest. Having advised OpenAI’s leadership on internal preparedness, he believes he can now make a greater impact on global AI governance from outside of the organization.

This departure may also reflect a deeper cultural divide within OpenAI. Many researchers joined to advance AI research and now find themselves in an increasingly product-driven environment. Internal resource allocation has become a flashpoint — reports indicate that Leike’s team was denied computing power for safety research before its eventual dissolution.

Despite these frictions, Brundage noted that OpenAI has offered to support his future work with funding, API credits, and early model access, with no strings attached.
