ML Safety Research Advice - GabeM

 

This article gives advice on careers in empirical machine learning research for AI safety, covering career planning, upskilling, graduate school, and the life of a researcher. It emphasizes the importance of hands-on practice and iteration, as well as developing good research taste.

🤔 **Career planning:** The article recommends consulting established career guides to understand paths into AI safety research, and deciding based on your own situation whether graduate school is right for you. It stresses the importance of practice, suggesting you build experience by joining research projects and replicating papers.

🚀 **Upskilling:** The article recommends a set of resources for building AI safety knowledge and technical skills, including learning AI safety fundamentals, speedrunning technical knowledge via online courses, and accumulating experience through hands-on projects. It also warns against over-studying, urging a focus on practice, and underscores the importance of solid deep learning fundamentals.

🎓 **Grad school:** The article advises pursuing a PhD only if you have a clear research direction, a suitable advisor, and well-defined career goals; otherwise it may be a waste of time. It also covers other graduate options, such as a master's degree or leaving a PhD program partway through.

🔬 **The researcher life:** The article emphasizes striving for greatness as a researcher, recommending Hamming's "You and Your Research" for inspiration and focus. It also offers research tips, including treating research as a stochastic decision process, making contact with reality as early as possible, imitating successful examples, using the scientific method, keeping a lab notebook, and cultivating good research taste.

🧠 **Research taste:** The article stresses the importance of developing good research taste, recommending that you think ahead about where AI is going and do research accordingly. It also suggests predicting how your papers will be received, understanding how the ML community thinks, and building an "emotional model" of research.

Published on July 23, 2024 1:45 AM GMT

This is my advice for careers in empirical ML research that might help AI safety (ML Safety). Other ways to improve AI safety, such as through AI governance and strategy, might be more impactful than ML safety research (I generally think they are). Skills can be complementary, so this advice might also help AI governance professionals build technical ML skills.

1. Career Advice

1.1 General Career Guides

2. Upskilling

2.1 Fundamental AI Safety Knowledge

2.2 Speedrunning Technical Knowledge in 12 Hours

2.3 How to Build Technical Skills

2.4 Math

3. Grad School

3.1 Why to Do It or Not

3.2 How to Get In

3.3 How to Do It Well

4. The ML Researcher Life

4.1 Striving for Greatness as a Researcher

4.2 Research Skills

4.3 Research Taste

4.4 Academic Collaborations

4.5 Writing Papers

4.6 Publishing

4.7 Publicizing

5. Staying Frosty

5.1 ML Newsletters I Like

5.2 Keeping up with ML Research

    Get exposure to the latest papers
      Follow a bunch of researchers you like, and some of the researchers they retweet, on Twitter.
      Join AI safety Slack workspaces for organic paper-sharing. If you can't access these, you can ask Aaron Scher to join his Slack Connect paper channel.
      Subscribe to the newsletters above.
    Filter down to only the important-to-you papers
      There’s a lot of junk out there. Most papers (>99%) won't stand the test of time and won't matter in a few months.
      Focus on papers with good engagement or intriguing titles/diagrams. Don’t waste time on papers that don’t put in the effort to communicate their messages well.
      Filter aggressively based on your specific research interests.
    Get good at efficiently reading ML papers
      Don't read ML papers like books, academic papers from other disciplines, or otherwise front-to-back/word-for-word.
      Read in several passes of increasing depth: Title, Abstract, First figure, All figures, Intro/Conclusion, Selected sections.
      Stop between passes to evaluate understanding and implications:
        Do I understand the claims this paper is making?
        Do I think this paper establishes sufficient evidence for these claims?
        What are the implications of these claims?
        Is it valuable to keep reading?
      Aim to extract useful insights in 10-15 minutes.
      For most papers, I stop within the first 3-4 passes:
        "Oh, that might be a cool paper on Twitter" -> open link -> look at title -> skim abstract -> look at 1-3 figures -> "Ahh, that's probably what that's about" -> decide whether to remember it, forget about it, or, rarely, read more
      You can usually ignore the "Related Work" section. It's often just the authors trying to cite everyone possibly relevant to the subfield who might be an anonymous peer reviewer for conference submissions, or, better yet, it's a takedown of related papers to signal why the new paper is novel.
        Sometimes, it is useful to contextualize how a non-groundbreaking paper fits into the existing literature, which can help you decide whether to read more.
      Nowadays, lead authors often post accessible summaries of the most important figures and insights from their papers in concise Twitter threads. Often, you can just read those and move on.
      Some resources I like for teaching how to read ML papers
    Practice reading papers
      Skim at least 1 new paper per day.
      A lot of the burden of understanding modern ML lies in knowing the vast context in which papers are situated.
        Over time, you'll not only get faster at skimming, you'll also build more context, so you'll have to look fewer things up.
        E.g. "this paper studies [adversarial prompt attacks] on [transformer]-based [sentiment classification] models" is a lot easier to understand if you know what each of those [things] is.
      It gets easy once you do it each day, but doing it each day is the hard part.
    Other tips
      Discussing papers with others is super important and a great way to amplify your learning without costing mentorship time!
      Understand arXiv ID information: arxiv.org/abs/2302.08582 means it's the 8582nd paper (08582) pre-printed in February (02) 2023 (23). See the sketch after this list for handling these IDs programmatically.
      https://alphaxiv.org/ lets people publicly comment on arXiv papers.
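
The arXiv ID convention above is easy to work with programmatically. As a minimal sketch (mine, not something from the post), the Python below parses a modern-style ID like 2302.08582 into its year/month/sequence parts and fetches the most recently submitted titles in a category from arXiv's public Atom API; the category cs.LG, the function names, and the specific query parameters are illustrative assumptions.

```python
import re
import urllib.request
import xml.etree.ElementTree as ET


def parse_arxiv_id(arxiv_id: str) -> tuple[int, int, int]:
    """Split a modern arXiv ID like '2302.08582' into (year, month, sequence number).

    '2302.08582' -> the 8582nd paper pre-printed in February 2023.
    """
    match = re.fullmatch(r"(\d{2})(\d{2})\.(\d{4,5})(v\d+)?", arxiv_id)
    if match is None:
        raise ValueError(f"Not a modern-style arXiv ID: {arxiv_id}")
    yy, mm, seq, _version = match.groups()
    return 2000 + int(yy), int(mm), int(seq)


def latest_arxiv_titles(category: str = "cs.LG", max_results: int = 5) -> list[str]:
    """Fetch the most recently submitted paper titles in a category via arXiv's Atom API."""
    url = (
        "http://export.arxiv.org/api/query?"
        f"search_query=cat:{category}"
        f"&sortBy=submittedDate&sortOrder=descending&max_results={max_results}"
    )
    with urllib.request.urlopen(url) as response:
        feed = response.read()
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    root = ET.fromstring(feed)
    # Each <entry> element in the Atom feed is one paper; grab its <title>.
    return [
        entry.findtext("atom:title", default="", namespaces=ns).strip()
        for entry in root.findall("atom:entry", ns)
    ]


if __name__ == "__main__":
    print(parse_arxiv_id("2302.08582"))      # (2023, 2, 8582)
    print(latest_arxiv_titles("cs.LG", 3))   # three newest cs.LG titles
```

Pairing a feed query like this with aggressive filtering on titles and abstracts mirrors the skim-first workflow described above.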

6. Hiring ML Talent

6.1 Finding ML Researchers

6.2 Finding ML Safety-Focused Candidates

6.3 Incentives

Acknowledgments

Many thanks to Karson Elmgren and Ella Guest for helpful feedback and to several other ML safety researchers for past discussions that informed this piece!


