Center on Long-Term Risk: Summer Research Fellowship 2025 - Apply Now


Published on March 26, 2025 5:29 PM GMT

Summary: CLR is hiring for our Summer Research Fellowship. Join us for eight weeks to work on s-risk motivated empirical AI safety research. Apply here by Tuesday 15th April 23:59 PT.


We, the Center on Long-Term Risk, are looking for Summer Research Fellows to explore strategies for reducing suffering in the long-term future (s-risks) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, you will be in regular contact with our researchers and other fellows, and receive guidance from an experienced mentor.

You will work on challenging research questions relevant to reducing suffering. You will be integrated into and collaborate with our team of intellectually curious, hard-working, and caring people, all of whom share a profound drive to make the biggest difference they can.

While this iteration retains the basic structure of previous rounds, there are several key differences:

- We are especially interested in applicants who want to do s-risk-motivated empirical AI safety work, reflecting a strategic shift in our research priorities.[1]
- We have simplified the application process.
- We expect to offer 2-4 fellowship positions.

Apply here by Tuesday 15th April 23:59 PT.

We're also preparing to hire for permanent research positions soon. If you'd like to stay informed, sign up for our mailing list on our website. We also encourage those interested in permanent positions to apply for the Summer Research Fellowship. 

Further details on the fellowship are available here.

  1. ^

    We are currently undergoing a strategic shift in our research priorities. Moving forward, the majority of our work will focus on s-risk-motivated empirical AI safety research in the following areas:

      - Personas/characters – How do models develop different personas or preferences? What are the most plausible training stories by which models develop malevolent or otherwise undesirable personalities? What preferences will misaligned models have by default, and what affordances will developers have to influence those preferences even if alignment does not succeed?
      - Multi-agent dynamics – How do models behave in extended multi-agent interactions, especially adversarial interactions where agents have conflicting goals? How well can we predict the behaviour of models in extended multi-agent interactions from their behaviour on shorter and cheaper evals (e.g., single-turn evals)? (A toy sketch of this question follows the list.)
      - AI for strategy research – How can (future) AI assistants meaningfully contribute to macrostrategy research or other forms of non-empirical research? How could we verify that AI assistants were producing high-quality macrostrategy research?
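    The snippet below is a purely illustrative toy sketch of the single-turn-versus-extended-interaction question above, not CLR's actual methodology; every class, function, and parameter in it is hypothetical. It stubs out a population of agents whose behaviour is driven by one latent trait, then measures how well a cheap single-turn score predicts behaviour over a longer, escalating interaction.

```python
import random
from statistics import correlation  # stdlib, Python 3.10+

# Hypothetical toy agent: a single latent "cooperativeness" trait
# drives behaviour in both the cheap and the expensive eval.
class ToyAgent:
    def __init__(self, seed: int):
        rng = random.Random(seed)
        self.cooperativeness = rng.random()  # latent trait in [0, 1]
        self._rng = rng

    def act(self, pressure: float) -> float:
        # Probability of cooperating drops as adversarial pressure rises.
        p = max(0.0, self.cooperativeness - pressure)
        return 1.0 if self._rng.random() < p else 0.0


def single_turn_eval(agent: ToyAgent) -> float:
    # Cheap proxy: a single low-pressure interaction.
    return agent.act(pressure=0.1)


def multi_turn_eval(agent: ToyAgent, turns: int = 50) -> float:
    # Expensive eval: pressure escalates over an extended adversarial exchange.
    return sum(agent.act(pressure=t / turns) for t in range(turns)) / turns


agents = [ToyAgent(seed=i) for i in range(200)]
cheap = [single_turn_eval(a) for a in agents]
costly = [multi_turn_eval(a) for a in agents]

# How predictive is the cheap eval of extended-interaction behaviour?
print(f"correlation(single-turn, multi-turn) = {correlation(cheap, costly):.2f}")
```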

    For more details on our theory of change and our general approach to empirical s-risk research, please see our measurement agenda (although note that our focus has narrowed somewhat since publishing it).

    We will also continue to explore s-risk macrostrategy, with a particular focus on understanding when and how interventions in AI development can robustly reduce s-risk. While we may accept some summer research fellows to work on this area, we expect most fellows to focus on the empirical research agenda.
     



