ML4Good (AI Safety Bootcamp) - Experience report

 

This post shares the author's experience attending the ML4Good UK AI safety bootcamp in September 2024. The bootcamp runs for 10 days, is free (including room and board), and covers many techniques and concepts in AI safety, such as gradient descent, transformers, and reinforcement learning. Its aim is to give participants a broad view of the AI safety field and help them find a research direction that interests them. The author considers the bootcamp a good fit for people who are interested in AI safety but have not yet settled on a research direction, as well as for those who want to build a more well-rounded picture of AI safety.

🤔 **Program overview:** The bootcamp runs for 10 days with free room and board, covering technical content such as gradient descent, transformers, adversarial attacks, RL basics, RLHF, evals, and mechanistic interpretability, alongside conceptual content such as timelines, threat models, risks from AI systems, and proposed solutions for alignment and AI governance.

📅 **Schedule:** The schedule is dense, with technical and conceptual lectures, hands-on Jupyter notebook sessions, workshops, discussions, and feedback rounds, all designed to help participants quickly get to know the many facets of AI safety.

🤝 **Community:** The camp has a strong community atmosphere; participants from diverse backgrounds actively interact, learn, and explore together, for example swimming, playing music, or playing games in their free time, creating a good learning environment.

🎯 **Goal:** The bootcamp aims to give participants a broader view of the AI safety field and help them find a research direction they are interested in, rather than to train specialists in a particular area.

💡 **Personal takeaways:** Through the bootcamp, the author met more people in the AI safety field, learned new techniques and concepts, and gained a clearer sense of their own career plans.

Published on November 5, 2024 1:18 AM GMT

Introduction

This is a short summary of my experience attending the ML4Good UK bootcamp in September 2024. There are two previous experience reports, which I link to at the bottom, but because the program is refined each time, I wanted to describe my own experience and add my two cents. This post is useful if you are contemplating applying for the camp, or if you want to learn about AI safety field-building efforts. For context: I studied computer science, have been working as a software engineer for a few years, and have had a hobby interest in AI safety for about two years (e.g., I did the BlueDot Impact AI Safety Fundamentals course).

Overview of the program

The bootcamp is free (including room and board) and happens over 10 days at CEEALAR in the UK. We had participants from all over Europe with a variety of backgrounds, most of them about to finish or having just finished their degrees. Majors skewed towards computer science/maths-y degrees, but there were plenty of exceptions and any background is welcome. Compared to previous iterations, the program density was somewhat reduced. Our courses ran from 9:00 to 19:30 and usually looked something like this:

| Time | Activity |
|-------------|----------|
| 9:00-11:00 | Lectures, usually one technical + one conceptual |
| 11:00-11:30 | Break |
| 11:30-13:00 | Work on Jupyter notebooks in pairs or alone, applying the lecture content |
| 13:00-14:00 | Lunch + break |
| 14:00-15:00 | Lecture, technical or conceptual |
| 15:00-16:30 | Workshop applying the lecture contents, doing our own reading/research |
| 16:30-17:00 | Break |
| 17:00-19:30 | Discussion of selected AI safety topics, Q&A with TAs, events, feedback on the day |
| 19:30-20:30 | Dinner |

Although attendance at each session was voluntary, nearly everyone chose to participate in all of them. We covered a wide range of topics. On the technical side: gradient descent/SGD, transformers, adversarial attacks, RL basics, RLHF, evals, and mechanistic interpretability. On the conceptual side we looked at: timelines and what they mean, threat models, risks from AI systems, and proposed solutions for alignment and AI governance. There was also a longer literature review session on a topic of our choice, and the last 1.5 days were focused on a project we chose. My impression was that the idea of the bootcamp is to expose you to a wide range of subfields in AI safety, so that you continue researching or working on the ones you find interesting, rather than to make you an expert in any of them. For example, if you are already set on doing, say, mechanistic interpretability, the bootcamp will see you spending 95% of your time on other topics, and might not be the best use of your time.
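To give a flavor of the notebook sessions: the earliest exercises were roughly of this shape, implementing gradient descent by hand on a toy problem before the curriculum moves on to transformers and RL. This is a minimal sketch of my own, not the camp's actual material:

```python
# Minimal hand-rolled gradient descent on a toy linear regression,
# illustrative of the kind of warm-up exercise in the notebooks.
# (My own sketch, not ML4Good's actual curriculum material.)
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # true w=3.0, b=0.5

w, b, lr = 0.0, 0.0, 0.1
for step in range(200):
    err = w * x + b - y            # residuals of the current fit
    grad_w = 2 * np.mean(err * x)  # d/dw of the mean squared error
    grad_b = 2 * np.mean(err)      # d/db of the mean squared error
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # converges near w=3.0, b=0.5
```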

Additionally, a key emphasis of ML4G is on affecting people's lives after the program. So we spent some time formulating our goals for the camp, had 1-on-1s for career advice and discussing career goals, committed to certain actions after the camp (like the writing group in which this post was created), etc.

Things I liked

Things I would change

My personal experience

Coming into the camp, I wanted to connect with more people interested in AI safety, learn a few technical things in a group setting (e.g., gain a better understanding of transformers and RLHF), and find a suitable area of AI safety to work in. I'd say all of these were fulfilled: I learned about a few orgs I hadn't heard of before, got a good broad overview of the field, and was able to get some advice on my career plans.

One of my favorite aspects was the community of the cohort; we had lots of self-organized activities in our free time, such as swimming in the sea (rather cold), playing music in the evening, or sitting together playing games. This also extended to the learning: people would pair up to work through the notebooks, explain concepts to each other, or help out other participants who lacked the background for a certain topic.

Overall, ML4G is suitable and a great experience if you're anywhere from completely new to AI safety to "don't exactly know what I want to focus on". The camp is probably not right for you if you want to significantly increase your technical mastery of a specific domain of AI safety. However, even if you already have a specific area you want to work in or learn more about, I recommend the camp to build a more well-rounded picture of AI safety and to make sure that your future work is impactful by your own assessment.

Shoutouts

Many thanks to Lovkush A, Atlanta N and Mick Z for proofreading and many helpful comments.

Previous experience reports:

Report 1

Report 2



