LASR Labs Spring 2025 applications are open!

 

This is a 13-week AI safety research programme based in London. Participants work in teams of 3-4 to write a technical paper under the guidance of an experienced researcher. The programme provides a range of support, including a stipend, food, and office space. There are skill requirements for applicants, along with a clear timeline and application process.

💻 The programme focuses on AI safety research aimed at reducing the risk of losing control of advanced AI. Participants work full-time from the LISA co-working space in London and receive support such as reading groups, expert talks, and careers coaching.

📄 Applicants should have certain skills, such as strong quantitative ability and machine learning experience. No specific experience is required, but successful applicants are likely to have relevant research or work backgrounds.

⏰ The timeline is clear: applications close on October 27th, and offers are sent in mid-December. Participants spend week 0 learning to evaluate technical projects, then 12 weeks writing and submitting an AI safety research paper.

🌐 The programme differs from other similar programmes and is an especially good fit for people who want to learn about a variety of projects in depth, focus on writing an academic paper, and enjoy working in a team.

Published on October 4, 2024 1:44 PM GMT

TL;DR: apply by October 27th to join a 13-week research programme in AI safety. You’ll write a technical paper in a team of 3-4 with supervision from an experienced researcher. The programme is full-time in London.

Apply to be a participant here. We’re also looking for a programme manager, and you can read more about the role here.

London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs) is an AI safety research programme focussed on reducing the risk of loss of control to advanced AI. We focus on action-relevant questions tackling concrete threat models.

LASR participants are matched into teams of 3-4 and will work with a supervisor to write an academic-style paper, with support and management from LASR. We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. Alumni from previous cohorts have gone on to work at UK AISI, OpenAI’s dangerous capabilities evals team, Leap Labs, and def/acc. Many more have continued working with their supervisors, are doing independent research, or are pursuing AI safety research in their PhD programmes. LASR will also be a good fit for someone hoping to publish in academia; four out of five groups in 2023 had papers accepted to workshops (at NeurIPS) or conferences (ICLR). All of the 2024 cohort’s groups have submitted papers to workshops or conferences.

Participants will work full-time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, BlueDot Impact, ARENA, and the MATS extension programme. The office will host various guest sessions, talks, and networking events.
 

Programme details: 

The programme will run from the 10th of February to the 9th of May (13 weeks). You will receive an £11,000 stipend to cover living expenses in London, and we will also provide food, office space and travel.

In week 0, you will learn about and critically evaluate a handful of technical AI safety research projects with support from LASR. Developing an understanding of which projects might be promising is difficult and often takes many years, but is essential for producing useful AI safety work. Week 0 aims to give participants space to develop their research prioritisation skills and learn about a range of agendas and their respective routes to value. At the end of the week, participants will express their project preferences, and we will match them into teams.

In the remaining 12 weeks, you will write and then submit an AI safety research paper (as a preprint, workshop paper, or conference paper). 

During the programme, flexible and comprehensive support will be available, including:

 

Who should apply?

We are looking for applicants with the following skills: 

There are no specific requirements for experience, but we anticipate successful applicants will have done some of these things:

Research shows that people from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.

Note: this programme takes place in London. Participants without an existing right to work in the UK will be given support with visas. Please get in touch if you have any visa-related questions: erin[at]lasrlabs.org

 

Topics and supervisors: 

The supervisors for the Spring 2025 round will be announced by early December. Previous LASR groups have published on important areas in AI safety, focused on reducing risks from advanced AI. We’ve had supervisors from Apollo Research, Decode Research, and top UK universities. We have just released our research outputs from the Summer 2024 programme:

- [Paper] A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
- [Paper] Hidden in Plain Text: Emergence and Mitigation of Steganographic Collusion in LLMs
- Evaluating Synthetic Activations composed of SAE Latents in GPT-2
- Characterizing stable regions in the residual stream of LLMs
- Honesty to Subterfuge: In-Context Reinforcement Learning Can Make Honest Models Reward Hack

In earlier rounds, participants have worked on projects relating to: the science of deep learning, multi-agent systems and collusion, theory of alignment in RL, deception in LLMs, interpretability probes, and concept extrapolation.

For Spring, we’re excited about a range of areas, including automated interpretability, scalable oversight, capability evals, and AI control. If you’re interested in being a supervisor for the Spring programme, send us an email at erin[at]lasrlabs.org

 

Timeline: 

Application deadline: October 27th at 23:59 UK time (GMT+1)

Offers will be sent in mid-December, following a work test and interview. 

 

How is this different from other programmes? 

There are many similar programmes in AI safety, including MATS, PIBBSS, the Pivotal Research Fellowship, and ERA. We expect all of these programmes to be excellent opportunities to gain relevant skills for a technical AI safety career. LASR Labs might be an especially good option if:

How did the programme go last time? 

In our feedback for the last round, the average likelihood to recommend LASR Labs was 9.25/10, and the NPS was +75. 

Testimonials from our recent cohort: 

“LASR gave me a lot of confidence to do productive research in the field, and helped me to realize that I am capable and competent. I learned a ton from working with a team of talented collaborators and having a supervisor who was very hands-on and made sure that we succeeded. I feel like my future work will be a lot more productive as a result of LASR!”

“Participating in the LASR Labs program has been an incredible experience and a key opportunity during the early stages of my career transition into AI safety. Erin and Charlie did a fantastic job of securing exceptional research mentors with well-scoped projects, connecting participants with necessary resources, and introducing key topics and ideas during the first week of the program. They created a friendly and helpful environment full of passionate and driven co-workers that I felt incredibly grateful to be a part of. Additionally, working within the LISA offices in London provided an invaluable sense of community, with an abundance of inspiring ideas, presentations, and future career opportunities. I now have a far deeper understanding of the state of AI safety, what it means to produce high-value research, and the engineering skills required.”

“I would highly recommend LASR Labs to anyone looking to move into AI Safety research. The program provides an excellent structure to facilitate upskilling in AI Safety and the production of a high-quality research output. The proposed projects are promising and well-scoped. Working in a team has been enjoyable and allows for faster progress on our research. The LISA offices are an exciting environment to work in. I've found the program highly engaging, feel I've improved as a researcher, and now intend to work full-time on AI safety research in the future.”



