LASR Labs Summer 2025 applications are open!

 

This 13-week AI safety research programme takes place in London. Participants work in teams of 3-4, writing a technical paper under the guidance of an experienced supervisor, with a stipend and a range of other support. Applications close on April 26th, with offers sent in June.

🎯 The programme focuses on AI safety, aiming to reduce the risk of loss of control to advanced AI

💻 Participants work full-time in London, collaborating in teams to write a paper

💰 An £11,000 stipend is provided, along with a range of support and events

📅 Applications close April 26th; offers are sent in June

Published on April 2, 2025 1:32 PM GMT

TL;DR: apply by April 26th to join a 13-week research programme in AI safety. You'll write a technical paper in a team of 3-4 with supervision from an experienced researcher. The programme is full-time in London.

London AI Safety Research (LASR) Labs is an AI safety research programme focussed on reducing the risk of loss of control to advanced AI. We focus on action-relevant questions tackling concrete threat models.

 

LASR participants are matched into teams of 3-4 and will work with a supervisor to write an academic-style paper, with support and management from LASR. We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. Alumni from previous cohorts have gone on to work at UK AISI, Apollo Research, OpenAI's dangerous capabilities evaluations team, and Open Philanthropy. Many more have continued working with their supervisors, or are doing AI safety research in their PhD programmes. LASR will also be a good fit for someone hoping to publish in academia: all five papers from the 2024 cohort were accepted to workshops at NeurIPS, and four out of five groups in 2023 had papers accepted to workshops (at NeurIPS) or conferences (ICLR).

 

Participants will work full-time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, BlueDot Impact, ARENA, and the MATS extension programme. The office will host various guest sessions, talks, and networking events.
 

Programme details: 

The programme will run from the 28th of July to the 24th of October (13 weeks). You will receive an £11,000 stipend to cover living expenses in London, and we will also provide food, office space and travel.

In week 0, you will learn about and critically evaluate a handful of technical AI safety research projects with support from LASR. Developing an understanding of which projects might be promising is difficult and often takes many years, but is essential for producing useful AI safety work. Week 0 aims to give participants space to develop their research prioritisation skills and learn about various agendas and their respective routes to value. At the end of the week, participants will express preferences about their preferred projects, and we will match them into teams.

In the remaining 12 weeks, you will write and then submit an AI safety research paper (as a preprint, workshop paper, or conference paper). 

During the programme, flexible and comprehensive support will be available, including:

 

Who should apply?

We are looking for applicants with the following skills: 

For more detail on how we think about and measure technical and research ability, refer to "tips for empirical alignment research" by Ethan Perez, which outlines the specific skills valued within an AI safety research environment.

There are no specific requirements for experience, but we anticipate successful applicants will have done some of these things:

Research shows that people from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.

Note: this programme takes place in London. Participants without an existing right to work in the UK will be given support with visas. Please get in touch if you have any visa-related questions: erin[at]lasrlabs.org
 

Topics and supervisors: 

The supervisors for the Summer 2025 round will be announced in the next couple of months. Previous LASR groups have published on important areas in AI safety, focused on reducing risks from advanced AI. We've had supervisors from Google DeepMind, the UK AI Security Institute, and top UK universities. These are our outputs from the Summer 2024 programme:

[Paper] A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
[Paper] Hidden in Plain Text: Emergence and Mitigation of Steganographic Collusion in LLMs
Evaluating Synthetic Activations composed of SAE Latents in GPT-2
Characterizing stable regions in the residual stream of LLMs
Honesty to Subterfuge: In-Context Reinforcement Learning Can Make Honest Models Reward Hack

 

Timeline: 

Application deadline: April 26th at 23:59 BST (GMT+1)

Offers will be sent in early June, following a skills assessment and an interview. 

How is this different from other programmes? 

There are many similar programmes in AI safety, including MATS, PIBBSS, Pivotal Research Fellowship, and ERA. We expect all of these programmes to be excellent opportunities to gain relevant skills for a technical AI safety career. LASR Labs might be an especially good option if:



