TechCrunch News, November 9, 2024
OpenAI loses another lead safety researcher, Lilian Weng

OpenAI's lead safety researcher Lilian Weng has announced she will leave the company on November 15, after leading several of its safety efforts. OpenAI has seen a string of departures in recent years, with many of those leaving accusing it of prioritizing commercial products over AI safety. OpenAI says it will arrange a replacement for Weng and notes that its Safety Systems unit employs many experts, but the company's approach to safety remains under scrutiny.

🌐 Lilian Weng has decided to leave OpenAI after seven years at the company; November 15 will be her last working day.

📄 Many have accused OpenAI of prioritizing commercial products over AI safety, and a number of employees have departed one after another.

💪 OpenAI says it will arrange a replacement for Weng; its Safety Systems unit has more than 80 experts.

Another one of OpenAI’s lead safety researchers, Lilian Weng, announced on Friday that she is departing the startup. Weng had served as VP of research and safety since August and, before that, was the head of OpenAI’s safety systems team.

In a post on X, Weng said that “after 7 years at OpenAI, I feel ready to reset and explore something new.” Weng said her last day will be November 15th, but did not specify where she will go next.

“I made the extremely difficult decision to leave OpenAI,” said Weng in the post. “Looking at what we have achieved, I’m so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving.”

Weng’s departure is the latest in a long string of exits by AI safety researchers, policy researchers, and other executives over the past year, several of whom have accused OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike, leaders of OpenAI’s now-dissolved Superalignment team, which tried to develop methods to steer superintelligent AI systems; both left the startup this year to work on AI safety elsewhere.

Weng first joined OpenAI in 2018, according to her LinkedIn, working on the startup’s robotics team that ended up building a robot hand that could solve a Rubik’s cube – a task that took two years to achieve, according to her post.

As OpenAI started focusing more on the GPT paradigm, so did Weng. The researcher transitioned to help build the startup’s applied AI research team in 2021. Following the launch of GPT-4, Weng was tasked with creating a dedicated team to build safety systems for the startup in 2023. Today, OpenAI’s safety systems unit has more than 80 scientists, researchers, and policy experts, according to Weng’s post.

That’s a lot of AI safety folks, but many have raised concerns about how much OpenAI prioritizes safety as it tries to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was dissolving its AGI readiness team, which he had advised. On the same day, The New York Times profiled a former OpenAI researcher, Suchir Balaji, who said he left OpenAI because he thought the startup’s technology would bring more harm than benefit to society.

OpenAI tells TechCrunch that executives and safety researchers are working on a transition to replace Weng.

“We deeply appreciate Lilian’s contributions to breakthrough safety research and building rigorous technical safeguards,” said an OpenAI spokesperson in an emailed statement. “We are confident the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable, serving hundreds of millions of people globally.”

Other executives who have left OpenAI in recent months include CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph. Earlier this year, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they’d be leaving the startup. Some of these folks, including Leike and Schulman, left to join an OpenAI competitor, Anthropic, while others have gone on to start their own ventures.
